Jen Neal, Contributing Member 2024-2025
Intellectual Property and Computer Law Journal
I. Introduction
This blog explores the potential dangers that unchecked AI chatbots pose to consumers and businesses alike. Part II provides background on AI chatbot development and use, along with recent examples of chatbot harms. Part III discusses the legal implications of AI chatbots, recent case law, and who could be held accountable for AI chatbot harms and abuse. Finally, Part IV concludes that developers and owners of AI chatbots should be held to a higher standard for the dangerous impacts of their inventions.
II. Background
The evolution of artificial intelligence (AI) chatbots is a remarkable journey that spans decades, driven by advancements in both computer science and natural language processing (NLP). The concept of creating machines capable of understanding and simulating human language dates back to the 1950s when pioneering figures like Alan Turing proposed theoretical frameworks such as the Turing Test, which set the stage for evaluating a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.[1]
The first notable breakthrough in chatbot development came in 1966 with the introduction of ELIZA, created by Joseph Weizenbaum at the Massachusetts Institute of Technology.[2] ELIZA was a rudimentary program designed to simulate a conversation between a user and a psychotherapist.[3] Despite its simplistic rule-based approach, ELIZA showcased AI’s potential to engage in meaningful, albeit limited, dialogue.
The following decades saw incremental improvements in chatbot technology, with more sophisticated programs emerging in the 1980s and 1990s.[4] One of the most influential systems of this period was ALICE (Artificial Linguistic Internet Computer Entity), which utilized pattern matching and a larger knowledge base to facilitate more fluid interactions.[5] ALICE’s architecture and design influenced subsequent chatbot development and became a foundation for many early conversational agents.[6]
In the early 2000s, with the rise of machine learning techniques and more advanced NLP models, chatbots began to evolve beyond simple scripted interactions.[7] Companies such as Apple, Google, and Amazon introduced digital assistants like Siri, Google Assistant, and Alexa, marking a shift towards AI systems capable of more natural, context-aware conversations.[8] These assistants leveraged large-scale data processing and machine learning algorithms, allowing them to improve over time based on user interactions.[9]
The most recent advancements in chatbot technology have been driven by the development of deep learning models, particularly the rise of transformer-based architectures such as OpenAI’s GPT series and Google’s BERT.[10] These models are capable of understanding complex language structures, providing more coherent and contextually relevant responses.[11]
Today, AI chatbots are integrated into a multitude of industries, from customer service to healthcare, finance, and entertainment.[12] With continued advancements in AI research, particularly in areas like multimodal learning and reinforcement learning, chatbots are expected to become increasingly sophisticated, bridging the gap between human-like communication and machine-driven interactions.[13] With such rapid advances, however, come unexpected drawbacks to the technology.
Though chatbots are increasingly sophisticated, they still make frequent, notable mistakes. When businesses prematurely rely on AI to communicate with customers rather than utilizing the technology with human supervision, mistakes become unavoidable roadblocks. For example, McDonald’s attempted to use AI to take drive-through orders to increase efficiency, but it had the opposite effect.[14] The AI ordering system had consistent problems understanding customer orders, adding excessive amounts of chicken nuggets and other items customers had not ordered.[15]
Another instance of dangerous chatbot mistakes became apparent when New York City attempted to use a Microsoft chatbot to provide residents with guidance on starting and operating businesses.[16] The information provided, despite being presented as factual, was not only incomplete but often “dangerously inaccurate.”[17] In some cases, the chatbot’s recommendations were outright unlawful, such as advising businesses to discriminate or to unlawfully take employees’ tips.[18]
AI tools have also been linked to darker circumstances than mere mistakes in a drive-through, becoming an instrument criminals use to exploit children. In 2023, the attorneys general of 54 states and US territories signed a letter to Congressional leaders warning of the growing danger of unchecked AI being used to exploit children and encouraging legislation to take action against these dangers.[19] The letter references AI-generated deepfakes of child sexual abuse material.[20] As many AI tools are open source, virtually anyone is able to generate these disturbing images without consequence, as the tools are largely unpoliced and unrestricted.[21] AI has also advanced beyond image generation and is now capable of imitating children’s voices.[22] The letter highlights AI scams that target parents and grandparents by using voice-mimicking technology to impersonate their children and stage fake kidnappings.[23]
AI chatbots not only cause harm through their systematic mistakes, but also when they are functioning as intended. AI chatbots have been implicated in harmful “relationships” with minors. In February of 2024, a 14-year-old boy in Florida committed suicide shortly after sending a message to an AI chatbot that he had been engaging with in a hypersexualized manner.[24] The boy told the chatbot he was “coming home,” and the chatbot replied, “please come home to me as soon as possible, my love.”[25] The boy’s mother sued the creator of the chatbot, but the platform remains active and popular, with a reported 28 million users as of August 2024.[26]
While advances in AI chatbot technology bring the promise of efficiency and entertainment, the landscape of AI chatbots remains somewhat of a “wild west.” Businesses that prematurely adopt AI chatbots to increase efficiency have found that doing so often causes more problems than it solves.[27] Unrestricted and unmonitored use of AI chatbots and other tools can also be linked to serious harms to children, either by facilitating abuse or by contributing to children’s mental health crises through direct communication.[28] The severity of these consequences, whether from premature business adoption or public, unrestricted use, indicates that the cart has been put before the horse. There should be more restrictions on AI chatbots in these early stages of development before the programs are rolled out to the general public. As legislatures have not adopted effective restrictions, a multitude of lawsuits have been brought to address the consequences of AI chatbots.
III. Discussion
As discussed in the Attorneys General letter to Congress, many forms of AI remain unregulated and allow for abuse or misuse.[29] Since then, Congress’s AI task force has released a report recommending a regulatory framework for future AI legislation.[30] Many of the report’s recommendations for addressing the shortcomings of AI in highly consequential decisions involve keeping a human in the loop to ensure the decisions are not discriminatory.[31] Though this appears to be sound advice, human intervention may be difficult to regulate in practice, as AI is often adopted precisely to save humans time. As the report was only recently released, there is still no formal federal legislation regulating AI; however, a handful of states have adopted resolutions or enacted AI legislation.[32]
Despite a lack of legislation, there are many active lawsuits regarding AI chatbots. A multitude of companies have adopted AI chatbots in the place of human customer service agents.[33] While the use of AI chatbots is convenient for these companies, it poses legal issues when the chatbots “hallucinate” and provide incorrect and often illegal advice.[34] For example, misinformation provided by a customer service chatbot put Air Canada in legal jeopardy.[35] Air Canada’s chatbot promised a bereavement discount to a passenger, assuring him that he could book a full-price flight to a funeral and then apply for the discount after the fact.[36] However, unbeknownst to the passenger, Air Canada’s policy required the bereavement request to be submitted before the flight, and the airline subsequently refused to honor the discount.[37] When the passenger brought legal action against the airline, the airline claimed the chatbot was “a separate legal entity that is responsible for its own actions.”[38]
The tribunal in British Columbia disagreed with the airline, requiring Air Canada to pay the passenger $812.02 in damages and fees.[39] Air Canada argued that the passenger should have checked the correct policy on its website, which the chatbot had linked to, but the tribunal found Air Canada responsible for all information on its website, including that provided by the chatbot.[40] This case, despite being decided in Canada, marks a landmark decision holding companies accountable for the information provided by their AI chatbots. It should encourage companies to exercise better quality control over the information their chatbots provide, or to cease using chatbots until the technology improves.
AI chatbots have also been at the center of more severe lawsuits involving correspondence with children.[41] The aforementioned lawsuit involving the Florida teen who allegedly committed suicide at the encouragement of an AI chatbot is not the only one of its kind.[42] The mother of a 17-year-old autistic Texas teen filed a lawsuit against the AI chatbot company Character.AI.[43] Allegedly, the teen turned to the chatbot for support and connection and instead received encouragement of self-harm and violence toward his family.[44]
The mother of the Florida teen who committed suicide alleges Character.AI allows minors unrestricted access to “life-like” AI chatbot companions without sufficient precautions.[45] The mother also alleges that the AI chatbots are designed to be addictive so as to increase user engagement, leading vulnerable users into inappropriate and often sexual conversations.[46] The lawsuit claims Character.AI utilizes chatbots that expose minors to sexual, violent, or otherwise inappropriate relationships.[47]
Though the litigation against Character.AI is still ongoing as of April 2025, the platform has made some changes intended to protect its minor users.[48] In November of 2024, Character.AI announced new safety initiatives for users under the age of 18.[49] These changes include a separate chatbot model for minors with stricter guidelines, improved intervention when chatbot use violates the company’s terms of service and community guidelines, a revised disclaimer on all chats reminding users that the chatbot is not a real person, and a notification after an hour of use.[50]
These changes are a step in the right direction toward restricting minors’ use of harmful chatbots, but the solution still has issues. The most obvious is that Character.AI has included no mechanism to verify users’ ages, allowing minors to lie about their age and access the less restricted chatbots. The changes read more like liability protections for the company than actual interventions to protect children. The result of the pending litigation should shed light on the legal repercussions companies like Character.AI should face for allowing unfettered access to role-play chatbots.
IV. Conclusion
AI chatbots have increased in popularity in both business and personal realms. Despite rampant mistakes and hallucinations, many businesses still employ AI chatbots to field customer service complaints. Recent rulings holding companies liable for the promises made by their chatbots should encourage companies to increase oversight of chatbot use and reduce mistakes that can detrimentally affect consumers.
AI chatbots have also been linked to various harms to minors. Unrestricted access and unfettered communication between chatbots and minors have led to mental health crises in multiple teens and enabled adults to create abuse material involving children. In both the business and personal realms, the lack of restrictions on AI chatbots has had severe consequences. The companies producing and deploying these chatbots should restrict their use before more harm is caused, at least until the technology improves and legal guidelines are in place.
[1] Graham Oppy and David Dowe, The Turing Test, Stanford Encyclopedia of Philosophy (Oct. 4, 2021), https://plato.stanford.edu/entries/turing-test/ [https://perma.cc/678C-8X5P].
[2] Ben Tarnoff, Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI, The Guardian (Jul. 25, 2023), https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai [https://perma.cc/U97H-MNJQ].
[3] Id.
[4] Marie Gobiet, The History of Chatbots, Onlim (Feb. 24, 2024), https://onlim.com/en/the-history-of-chatbots/ [https://perma.cc/NMN4-HD45].
[5] Id.
[6] Id.
[7] Id.
[8] Id.
[9] Iqbal H. Sarker, Machine Learning: Algorithms, Real-World Applications and Research Directions, Springer Nature (Mar. 22, 2021), https://link.springer.com/article/10.1007/s42979-021-00592-x [https://perma.cc/NSK2-LTEL].
[10] Kuriko Iwai, NLP Application – Building AI Chatbot Using Transformer Models and LangChain, LinkedIn (Apr. 16, 2024), https://www.linkedin.com/pulse/nlp-application-building-ai-chatbot-using-transformer-kuriko-iwai-cgqkc/ [https://perma.cc/KCL5-CNLY].
[11] Id.
[12] What is a chatbot?, IBM, https://www.ibm.com/think/topics/chatbots [https://perma.cc/5VRA-UMBS].
[13] Sarker, supra note 9.
[14] Thor Olavsrud, 12 Famous AI Disasters, CIO (Oct. 2, 2024), https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html [https://perma.cc/85AB-G6M4].
[15] Id.
[16] Colin Lecher, NYC’s AI Chatbot Tells Businesses to Break the Law, The Markup (Mar. 29, 2024), https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law [https://perma.cc/F788-BYDH].
[17] Id.
[18] Id.
[19] Letter from Dave Yost et al., Artificial Intelligence and the Exploitation of Children, National Association of Attorneys General, to Pat Murry et al., available at https://ncdoj.gov/wp-content/uploads/2023/09/54-State-AGs-Urge-Study-of-AI-and-Harmful-Impactson-Children.pdf [https://perma.cc/A3LK-7PRP].
[20] Yost, supra note 19.
[21] Id.
[22] Id.
[23] Id.; see also Louis DeNicola, What are AI Scams?, Experian (Apr. 5, 2024), https://www.experian.com/blogs/ask-experian/what-are-ai-scams/ [https://perma.cc/ESB5-38D3].
[24] Kate Payne, An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges, AP News (Oct. 25, 2024), https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0 [https://perma.cc/TL84-FHRK].
[25] Id.
[26] David Curry, Character.ai revenue and usage statistics, Business of Apps (Jan. 22, 2025), https://www.businessofapps.com/data/character-ai-statistics/#:~:text=AI-,character.ai%20Users,used%20chatbots%20in%20the%20world. [https://perma.cc/E5LN-AK8R].
[27] Nilesh Patel et al., AI Chatbots, Hallucinations, and Legal Risks, Frost Brown Todd (Apr. 15, 2024), https://frostbrowntodd.com/ai-chatbots-hallucinations-and-legal-risks/ [https://perma.cc/9R96-Z49C].
[28] Yost, supra note 19.
[29] Yost, supra note 19.
[30] Tony Samp et al., House AI taskforce unveils report with focus on sectoral regulatory framework, DLA Piper (Dec. 19, 2024), https://www.dlapiper.com/en-us/insights/publications/ai-outlook/2024/an-ai-blueprint-for-future-congressional-action [https://perma.cc/6R57-28CG].
[31] Id.
[32] Artificial Intelligence 2024 Legislation, Nat’l Conference of State Legislatures (Sep. 9, 2024), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation [https://perma.cc/LR7Q-7UQ2].
[33] Patel, supra note 27.
[34] Id.; see Lecher, supra note 16.
[35] Patel, supra note 27.
[36] Maria Yagoda, Airline held liable for its chatbot giving passenger bad advice – what this means for travelers, BBC (Feb. 23, 2024), https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know [https://perma.cc/U2RN-MQQB].
[37] Id.
[38] Id.
[39] Id.
[40] Id.
[41] See Payne, supra note 24.
[42] Matthew Bergman, Character.AI Lawsuits, Social Media Victims Law Center (Mar. 4, 2025), https://socialmediavictims.org/character-ai-lawsuits/ [https://perma.cc/6LBB-YWCM].
[43] Id.
[44] Id.
[45] Id.
[46] Id.
[47] Id.
[48] The Next Chapter: Character.AI Roadmap, character.ai (Nov. 13, 2024), https://blog.character.ai/roadmap/ [https://perma.cc/X89D-G68F].
[49] Id.
[50] Id.