What is meant by Artificial Intelligence & how does it differ from ‘automation’?

Artificial Intelligence (AI) can be defined as the ‘ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.’1 Automation, by contrast, generally refers ‘to the use of machines and computers that can operate without human controls.’2 From the foregoing, we can say that the two terms are related but distinct: AI is more about thinking like a human, whereas automation is more about carrying out the activities that follow from human or machine thinking. In this note, we maintain that distinction between AI and automation.

Possible applications of AI to Travel Risk Management (TRM)

The role of employers in TRM is to discharge their duty of care by identifying foreseeable travel risks that could affect their travelling workers, assessing those risks and taking reasonable steps to prevent or mitigate them. Equally, where circumstances arise that were not reasonably foreseeable, the employer is expected to engage with the traveller to advise and assist them on how to ensure their safety, security, health and well-being when faced with dynamic events. This raises two questions: how much of this process could be undertaken or supported by AI, and how much of it should be automated?

It appears feasible that identifying travel risk factors (e.g., crime, political unrest, terrorism, climate hazards, infection, available health care) is amenable to automation, while assessing them (i.e., judging how likely it is that one or more of those factors is relevant, with an associated degree of likelihood and impact) seems amenable to the application of AI. We could speculate that, with the risk factors identified and assessed, the appropriate mitigation information or briefing could be delivered at the same time.
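To make the idea concrete, here is a minimal, purely illustrative sketch in Python; the factor names, 1–5 scales and rating thresholds are all invented for this example and are not drawn from any TRM standard or existing product:

```python
# Illustrative only: hypothetical likelihood/impact scores (1-5 scales)
# for risk factors identified for a destination.
factors = {
    "crime": {"likelihood": 3, "impact": 2},
    "political_unrest": {"likelihood": 2, "impact": 4},
    "climate_hazard": {"likelihood": 1, "impact": 3},
}

def risk_rating(likelihood: int, impact: int) -> str:
    """Map a likelihood x impact score onto a coarse rating band."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Rate each factor and surface those warranting a mitigation briefing.
ratings = {name: risk_rating(f["likelihood"], f["impact"])
           for name, f in factors.items()}
to_brief = [name for name, r in ratings.items() if r != "low"]
```

The point of the sketch is the pipeline shape, not the numbers: identification populates `factors`, assessment produces `ratings`, and the mitigation briefing follows directly from the same data.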

In respect of pre-travel assessment of a worker’s fitness for travel, there would properly be concern about how sensitive personal information might be handled in an AI environment. However, few organisations seek disclosures from travellers on this, meaning employers’ predominant approach is ‘don’t ask, don’t tell’, which leaves it to the discretion of travellers as to what confidential information they may wish to raise, if any; no AI dilemma arises because there is no existing process to automate. But the confidential use of apps to help travellers ask themselves, and answer honestly, questions about their physical and emotional fitness for business travel may be a future consideration. Armed with a reasonable self-assessment, the traveller could decide whether to seek medical advice or assistance to mitigate any travel health concerns identified.

Is there a role for AI where a traveller experiences difficulties during business travel – everything from becoming ill to being the victim of a crime? How might a traveller feel if they were directed to a medical or security ‘chatbot’ that gathers information and tries to identify the assistance or support they might need? In our experience, the advice to business travellers facing an immediate crisis is to contact the in-situ emergency services, where available. Where organisations operate with third-party security and/or medical providers, or security operations centres or equivalents, these centres are expected to triage the reported incident or event: its relative seriousness, the assistance required, the resources entailed in delivering that assistance, and the timescale within which that assistance must arrive to have beneficial effect. Whilst it is likely that the assessment of the situation, sensible suggestions as to resolution and the triggering of resource mobilisation could be undertaken through AI, prospective users of an AI-based service might feel uncertain and uncomfortable with the notion, and AI development may not yet be sufficiently mature to give managers and users the confidence they need.
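As a hedged illustration of the triage questions listed above, the following Python sketch shows how two of those dimensions – seriousness and the timescale within which assistance must arrive – might drive a coarse response tier. The `Incident` fields, scales and thresholds are assumptions made for the example, not a description of any operations centre’s actual process:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """A reported traveller incident (fields are illustrative)."""
    description: str
    seriousness: int        # 1 (minor) .. 5 (life-threatening)
    response_hours: float   # window for assistance to have beneficial effect

def triage(incident: Incident) -> str:
    """Coarse triage into a response tier, mirroring the questions an
    operations centre would ask: how serious is it, and how quickly
    must assistance arrive to be of benefit?"""
    if incident.seriousness >= 4 or incident.response_hours <= 1:
        return "emergency: direct to local emergency services and mobilise"
    if incident.seriousness >= 2:
        return "priority: assign a provider, follow up within the window"
    return "routine: advise and monitor"
```

Even in a sketch this simple, the thresholds embody judgements a human would normally make, which is precisely where the discomfort with an AI-based service arises.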

Regulatory environment

In August 2024, the EU’s Artificial Intelligence Act, a Regulation applying directly in all Member States, entered into force. The EU has provided a Q&A document describing the different levels of risk attached to what AI is used for. At first glance, the use of AI in TRM may not fall into the ‘high-risk’ category, but as TRM concerns the health, safety and security of travellers, that view may be challengeable. ‘Uncertain and uncomfortable’ might reflect some of the opinions and views held on the subject of AI, but no environment – including TRM – stands still. For example, the desire of many businesses to reduce carbon emissions is already biting into business travel, particularly long haul. Reduced travel may also put downward pressure on TRM budgets and encourage further thinking about the role of AI in travel risk, safety and security considerations and decision-making.

Risks and considerations

However, before coming to any conclusions, consideration should also be given to some of the challenges involved in applying AI to TRM:

  • How would AI be ‘trained’ with sufficient data in TRM, particularly noting the dynamic environment in which TRM decision-making can often arise?
  • How would AI use in TRM fit the legal and regulatory environment in different jurisdictions?
  • How would quality assurance be maintained?
  • Is an intelligent search engine (whether of internal-only or moderated external sources, or both) a more appropriate approach?

Survey

Click here to participate in a survey on future perspectives regarding AI’s potential role in Travel Risk Management.

References

  1. ‘Artificial intelligence (AI)’, Encyclopaedia Britannica, accessed 21 July 2024
  2. ‘Automation’, Cambridge Dictionary, accessed 21 July 2024