
Our Take on AI


Chatbots and Liability

Air Canada was recently held liable by the British Columbia Civil Resolution Tribunal (CRT) for hallucinations from its chatbot in Moffatt v. Air Canada. One of Air Canada's arguments was that it could not be held liable for information provided by its chatbot because the chatbot was a “separate legal entity” responsible for its own actions. As you can imagine, this argument did not go over well, and the decision noted that “[i]t should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.” The decision also found that “Air Canada did not take reasonable care to ensure its chatbot was accurate.” The damages in this case were small (CA$650), but the case is a cautionary tale for companies using chatbots, as the damages could have been far more severe if the hallucinations had been more widespread.

Recently, in the U.S., the FTC placed guardrails on Rite Aid’s use of AI-based facial recognition for biometric scanning. While the FTC’s order is specific to biometric scanning and the biased or discriminatory outcomes of that scanning, which is a higher-risk use case than a chatbot, the FTC provided some guidance on what it expects from companies in terms of checking for inaccurate outputs from AI systems. For example, the FTC provided that Rite Aid should document the testing of its AI system prior to deployment, and at least every twelve months thereafter, to identify factors that cause or contribute to inaccurate outputs and to assess any statistically significant variations in the system’s rate of inaccurate outputs.
