Last week (on Valentine’s Day), the leading airline Air Canada was ordered to issue a refund to a customer who was misled by its chatbot. This case has received a fair amount of attention across the internet and social media.

The facts of the case are pretty straightforward. An Air Canada customer sought to obtain a bereavement fare for travel after the passing of his grandmother. The customer relied on information provided to him by Air Canada’s chatbot that he could apply for a refund retroactively after he purchased his ticket. When he applied for a refund, Air Canada informed him that bereavement rates would not be applicable retroactively on completed travel. The customer provided Air Canada with a screenshot of the bot’s advice and then he sued Air Canada in small claims court for the fare difference.

Air Canada argued that the correct information about bereavement fares could be found on its website, and it also maintained that the chatbot was a separate legal entity responsible for its own actions. The court nevertheless ruled in favor of the customer.

This “Canada Chatbot” case is an interesting one. First off, if I were providing legal advice to Air Canada, I would have advised it to give the customer the appropriate bereavement fare refund, along with a suitable credit for future air travel, to help avoid a potential legal claim and the associated negative publicity.

Here are my thoughts on the AI-specific aspects of this case:

New AI Case Law: While this is only a small-claims court case, it shows that as AI becomes more prevalent across all industries, we will also see an increase in AI jurisprudence. We need to remember that in addition to the growing body of applicable AI rules and regulations, relevant legal cases will also significantly impact the development of AI law. Hopefully, lawyers and judges will increasingly understand AI in order to help shape meaningful AI law.

The Rise of Chatbots: As this case demonstrates, Air Canada, like many companies, uses chatbots as a digital concierge to help serve its customers and to enable smarter use of its human resources. As a younger generation of potential customers who grew up texting and using smartphone apps enters the marketplace, and as better AI-powered chatbot tools become available, even more organizations will use chatbots to help address questions from their customer base. In the legal industry, there are growing opportunities for legal departments to use bots to serve their business clients, for law firms to use bots to convey relevant information to their clients, and for our court systems to leverage bots to improve access to justice for citizens.

We Are Our Bots: The bots that organizations use to interact with the public are really extensions of the organizations themselves. They serve as an organization’s agents and representatives, and it will be difficult for organizations to disclaim responsibility when their bots supply inaccurate information that customers rely upon, especially when those organizations are highly sophisticated and have “deep pockets.” Organizations that choose to use chatbots also need to carefully vet and select the providers who supply the underlying AI technology.

Proactive Chatbot Oversight: When organizations use bots to serve their customers, they need to make sure the data they “feed” to the bot is relevant, accurate and constantly updated; they cannot act in a laissez-faire manner. All organizations, including legal organizations, need to properly oversee and maintain their respective chatbot solutions on an ongoing basis. For legal organizations, this active oversight function is similar to what lawyers must do from a legal ethics perspective in overseeing and managing paralegals, legal professionals and technology tools like cloud computing.

Chatbot Transparency: If legal organizations are using chatbots to interact with the public or their clients, it’s also a good idea for those organizations to make clear that users connecting with a chatbot are not interacting with an actual lawyer.

Deploying chatbots as a strategy to serve customers can offer a variety of benefits. Please make sure that you are smart and responsible when deploying chatbots.


Dennis Garcia is an Assistant General Counsel for Microsoft Corporation based in Chicago. He practices at the intersection of law, technology and business. Prior to joining Microsoft, Dennis worked as an in-house counsel for Accenture and IBM.

Dennis received his B.A. in Political Science from Binghamton University and his J.D. from Columbia Law School. He is admitted to practice in New York, Connecticut and Illinois (House Counsel). Dennis is a Fellow of Information Privacy, a Certified Information Privacy Professional/United States and a Certified Information Privacy Technologist with the International Association of Privacy Professionals. Please follow Dennis on Twitter @DennisCGarcia and on his It’s AI All the Time Blog.