
Air Canada: can chatbots lie?

March 13, 2024

Jeremy Peckham - Research Lead at the AI, Faith and Civil Society Commission

Air Canada offers reduced rates for passengers travelling due to bereavement, but a recent interaction with its AI-powered chatbot has led to a significant customer dispute.

Jake Moffatt, seeking information about the airline's bereavement fare policy, turned to the chatbot for help. The bot informed him that he could claim a refund retrospectively, within 90 days of purchasing a ticket. Relying on this advice, Moffatt booked his flight and later applied for the reduced fare. However, Air Canada told him that the discounted rate should have been applied before the ticket was purchased, and his request was denied.

Company refuses to honour chatbot output

Even after Moffatt provided Air Canada with a screenshot of the chatbot's response, the airline refused to accept responsibility for the information the bot had given him, insisting that it could not be held liable for the chatbot's output. Frustrated, Moffatt spent two and a half months attempting to resolve the issue, eventually escalating the matter to British Columbia's Civil Resolution Tribunal.

"In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website." [Civil Resolutions Tribunal Feb 2024]

Tribunal rules against Air Canada

The airline argued that the chatbot's response to Moffatt's query included a link to the company's policy, which states that discounted bereavement fares cannot be claimed after a ticket has been purchased. The tribunal nevertheless ruled against Air Canada, finding that the company had failed to take reasonable care to ensure the chatbot's information was accurate.

"I find Air Canada did not take reasonable care to ensure its chatbot was accurate."
"Negligent misrepresentation can arise when a seller does not exercise reasonable care to ensure its representations are accurate and not misleading" [Civil Resolutions Tribunal Feb. 2024]

Trustworthiness of AI chatbots in question

A key issue raised by this case is whether chatbots built on Generative AI and Large Language Models can be relied upon. Studies show that Generative AI is prone to producing errors, often referred to as 'hallucinations'. The tribunal's ruling shows that companies can be held accountable for the outputs of Generative AI.

"While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website." [Civil Resolutions Tribunal Feb. 2024]

Human Values Risk Analysis

Truth & Reality: HIGH RISK
AI chatbots can provide inaccurate information. Such errors can mislead users into making decisions based on faulty advice, undermining trust in the technology.
Privacy & Freedom: HIGH RISK
AI systems can collect personal and copyrighted data, posing risks to privacy. Without proper safeguards, users may unknowingly expose sensitive information, and the data could be misused or compromised, leading to breaches of privacy and security.
Cognition & Creativity: MEDIUM RISK
Relying on AI chatbots may reduce users' critical thinking and problem-solving abilities. As seen in this case, the customer trusted the chatbot's response without verifying it, which could encourage passive reliance on automated systems and discourage independent thinking.
Authentic Relationships: MEDIUM RISK
AI chatbots replace human interaction, which can erode authentic relationships in customer service. Misunderstandings and frustration, as experienced by Moffatt, might be avoided by human agents who can empathise and resolve issues more effectively.
Dignity of Work: MEDIUM RISK
As AI chatbots replace customer service agents, there is a risk of job displacement, undermining the dignity of work. Overreliance on automation could diminish the need for human employees, affecting job satisfaction and security.
Moral Autonomy: LOW RISK
No direct impact on moral autonomy.

Policy Recommendations

The case raises significant concerns about the use of AI in customer service and its potential to provide misleading or incorrect information. It also emphasises the need for clear policies and accountability regarding AI outputs, especially when they affect consumer rights.

1. Organisations that deploy chatbots for public or client use must be held accountable for their outputs, even when these outputs contain errors or conflict with other publicly available information. Legislation may be required to establish clear 'product' liability, particularly when the chatbot itself is considered the product.

2. Copyright protections should be strictly enforced, with no exceptions made for AI companies.

References

Civil Resolution Tribunal decision, Moffatt v. Air Canada (Feb 2024): https://decisions.civilresolutionbc.ca/crt/crtd/en/item/525448/index.do
