In a landmark decision, Air Canada has become one of the first organizations to be held liable for an AI-generated error, underscoring the growing accountability challenges companies face as they integrate artificial intelligence into customer service. The ruling by British Columbia’s Civil Resolution Tribunal set a precedent by holding Air Canada responsible for misleading information its chatbot gave a grieving passenger. Here’s how the case unfolded and what it means for the future of AI liability in customer service.
The Incident: AI Misguides a Passenger
In 2022, British Columbia resident Jake Moffatt needed a last-minute flight to Ontario for his grandmother’s funeral. Unsure how the airline’s bereavement policy worked, he asked Air Canada’s website chatbot. The chatbot told him he could book at a regular fare and apply for a bereavement discount within 90 days of purchase. Relying on that advice, Moffatt booked his ticket, expecting a partial refund once he submitted the application. Air Canada later denied his request, stating that its policy did not allow retroactive fare reductions for bereavement travel.
Air Canada’s Defense: Shifting Liability to the Chatbot
When Moffatt challenged the refusal, the airline stood by its decision, going so far as to argue that the chatbot was a separate legal entity responsible for its own actions. Air Canada also noted that the chatbot had linked to the airline’s official bereavement policy page, which stated that refunds could not be applied retroactively, and argued that Moffatt should have reviewed that page rather than relying solely on the chatbot’s advice.
The tribunal did not accept Air Canada’s argument. Tribunal member Christopher Rivers called it “remarkable” for Air Canada to suggest it was not liable for information given by its chatbot, writing that “it should be obvious to Air Canada that it is responsible for all information on its website,” regardless of the format.
The Outcome: A New Standard for AI Accountability
In February 2024, the tribunal ruled in Moffatt’s favor, awarding him CA$650.88 in damages, representing the difference between the fare he paid and the bereavement fare the chatbot told him he could claim. The decision makes clear that companies deploying AI tools for customer service cannot avoid liability when those tools give misleading answers: organizations must ensure the accuracy of AI-generated information and cannot “hide behind” their chatbots when errors occur.
After the ruling, Air Canada said it would comply, and its chatbot was subsequently reported to be offline on its website, suggesting a reassessment of its reliance on AI-driven customer service tools.
The Bigger Picture: Implications for AI in Customer Service
This case serves as a crucial benchmark for AI accountability, signaling that as companies adopt AI-driven tools, they will be held responsible for the outcomes, even when those outcomes stem from automated responses. It also underscores that AI tools cannot be left to operate without human oversight, particularly in sensitive or complex service areas.
The implications extend beyond the airline industry: AI liability is now a pressing consideration across every sector that uses AI for customer interactions. Legal experts have suggested that had the chatbot carried disclaimers or explicit guidance to verify its answers, the ruling might have differed. The need for clear, accurate, and responsibly governed AI systems in consumer-facing roles is becoming more urgent.
Lessons for Companies: Mitigating Risks of AI Errors
Companies embracing AI need to adopt stringent guidelines to prevent similar issues. Here are several measures businesses can consider, followed by a short sketch of how they might fit together in code:
- Clear Disclaimers: Ensure that chatbots and AI tools clarify when users should verify information or speak with a human representative.
- Regular Monitoring: Conduct regular audits to detect and resolve potential inaccuracies in AI-generated responses.
- Oversight Mechanisms: Implement human oversight to review interactions, especially for high-stakes queries.
- Transparency and Training: Keep AI training datasets updated and train chatbots to recognize topics beyond their scope, directing users to appropriate sources.
A Cautionary Tale for AI Integration
Air Canada’s case reflects both the promise and the pitfalls of AI in customer service. While AI can streamline services and improve customer experiences, its errors can carry costly legal and reputational consequences. The case underscores the importance of transparency, oversight, and responsibility in AI deployment, ensuring organizations remain accountable for AI’s role in their operations.
This decision is a strong reminder that, as AI technology evolves, companies must remain vigilant, incorporating ethical and practical safeguards to protect consumers and mitigate legal risks in the era of AI-driven customer service.
If you are an organization looking to utilize AI, contact await.ai for a demo of Await Cortex, our AI governance solution designed for highly regulated organizations.