From Clinics to Chatbots: How Congress Can Protect Sensitive Information on AI Sites

Tom Voet, Associate Member 2025-2026

Intellectual Property and Computer Law Journal

I. Introduction

Congress should extend privacy protections to artificial intelligence (AI) companies in response to the growing use of AI chatbots as a mental health resource in an increasingly online population, using the structure of HIPAA as a model. These novel uses of online resources present several legal issues, such as those surrounding data privacy, that should stand as clear priorities for our legislature. Part II provides background on the heightened presence of AI chatbots in mental health discussions, as well as how the United States legislature has previously handled the protection of sensitive data, particularly confidential patient-doctor disclosures. Part III discusses how these data privacy laws could prove useful as a model for future legislation regulating sensitive data disclosed to AI chatbots.

II. Background

Mental Health and AI

The United States is suffering a significant mental health crisis. In 2021, over 20% of the adult population and over 15% of the youth population experienced some kind of mental illness.[1] However, only 47% of these individuals actually received treatment for their conditions, indicating that many of those suffering lack adequate support.[2] One contributor to this problem is the current shortage of mental health professionals in America, which has created an unfavorable environment for many people in need of treatment.[3] A recent study by the Health Resources and Services Administration illustrates that nearly a third of the population lives in a designated Mental Health Professional Shortage Area[4]—and this number is only projected to rise.[5] Those without clear access to professional assistance are often left to rely on their own inner circles for support, but this is an unreliable and sometimes insufficient substitute for professional treatment. For those without any support network, the necessary help can seem impossible to find.

The inability to access mental health resources, paired with the onset of AI, has caused many of those suffering from mental illness to turn to AI chatbots to vent frustrations and worries.[6] These chatbots can act as a set of “open ears” for those struggling with mental health problems, even mimicking the behavior of a therapist and providing advice. Although some have voiced concern over the efficacy of “AI therapy,” particularly regarding the impersonal nature of AI chatbot use, studies have shown these practices can produce meaningful results (though not as effective as traditional therapeutic services).[7] Still, the inexpensive and readily accessible nature of online resources makes them a logical choice for many.

AI therapy has emerged as a workable solution to the ongoing mental health crisis in America. However, there are ethical concerns with this use of AI that have yet to be addressed in legislation, specifically regarding data collection. Several mainstream AI service companies, such as OpenAI, operate under broad privacy policies that allow them to store a wide range of user data, including much of the information that could be disclosed during AI therapy “sessions.”[8] Since the dawn of the internet, data collection has been a deeply contentious topic. Given the often personal nature of AI conversations, far more sensitive information is being disclosed now than ever before. Alongside general health information, users are uploading deeply personal anxieties and mental health conditions to AI sites. If the United States legislature intends to contain the rapid expansion of AI, a strong place to start would be user privacy within these spaces. To structure these limitations, the legislature can look to preexisting privacy laws as a starting point.

Data Privacy and the Legislature

The commodification of personal data has become a hallmark of modern life, where our personal information is stored, bought, and sold every day by entities hoping to use it to turn a profit.[9] In response, Congress has passed several pieces of legislation aimed at limiting some of the negative implications of this data marketplace. Congress enacted the Health Insurance Portability and Accountability Act (HIPAA) in 1996 to ensure the confidentiality of disclosures between patients and health providers, delegating authority to the Department of Health and Human Services (HHS) to create a regulatory framework designed to protect confidential information handled by healthcare providers.[10] Pursuant to this authority, HHS issued the Privacy Rule, which protects sensitive patient information from being disclosed by healthcare providers.[11] Sensitive information under the Privacy Rule includes any “individually identifiable health information,” including information about the physical or mental wellbeing of the patient.[12] HIPAA applies to specified “covered entities,” including healthcare providers, clearinghouses, health plans, and related business associates.[13] A major goal of the legislature in passing HIPAA was to strike a balance between permitting important uses of information and upholding the privacy of people who seek care.[14] HIPAA has created an environment of confidentiality within medicine, allowing doctor-patient relationships to truly be private.

Although HIPAA was extremely influential and provided necessary change to protect individual privacy, it did not adequately account for the realities of an increasingly online world. Responding to this modernization, Congress passed the Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act),[15] which incentivized the adoption of electronic health records and extended direct liability to business associates, closing a loophole in HIPAA that had allowed business associates to escape liability.[16] Both HIPAA and the HITECH Act illustrate Congress’s willingness to protect sensitive consumer data in the face of technological advances.

III. Discussion

AI regulation remains fragmented and uncertain. Although AI is becoming a staple of daily life, it is unclear what the best steps may be to adequately contain it. If legislators are going to begin regulating AI use, it may be beneficial to start by applying standards Congress has already embraced in prior legislation. HIPAA and the HITECH Act can play an instructive role and serve as a model for Congress in forming a regulatory framework concerning data privacy on AI chatbots. Specifically, Congress and a designated federal agency could rely on HIPAA’s definitions of protected health information, as well as its “minimum necessary” rule, which establishes a standard for what data may be collected.[17] This could create a statutory scheme that treats AI companies as covered entities under HIPAA, which would require them to protect sensitive information and subject them to liability if they fail to do so.[18]

Minimum Necessary Rule

The HIPAA Privacy Rule includes an important limitation on data handling known as the “minimum necessary” rule.[19] It provides that a covered entity must make “reasonable efforts” to use, disclose, and retain only the minimum amount of protected health information (PHI) necessary to carry out its operations.[20] This establishes a demanding but fair standard: covered entities have some grounds to store consumer information, but face strict boundaries on when they must stop. The minimum necessary standard concedes that some retention of PHI is necessary,[21] but requires a covered entity to at least make a reasonable judgment about where that line lies.[22]

Adapting this model to AI platforms would provide Congress with a clear and workable framework. Once the legislature identifies the categories of health information it wishes to protect, it could delegate authority to an agency, such as the Federal Trade Commission (FTC), to establish a standard for AI companies comparable to the minimum necessary rule. This framework would mirror that of HIPAA, where Congress delegated authority to HHS. Such a statutory scheme could require AI companies to retain personally identifiable information of consumers only to the extent necessary to operate their platforms.

Though AI companies may need to store some user information to ensure proper functionality, the current landscape lacks any meaningful constraint. A minimum-necessary framework would impose a sensible limit, requiring AI companies to collect only what is reasonably required, to store it securely, and to retain it only as long as needed to support the service. This approach would both align with precedent and fill a significant gap in existing legislation.

Protected Health Information

HIPAA regulations prioritize robust confidentiality in the context of the doctor-patient relationship. Under HIPAA, PHI encompasses any “individually identifiable health information” transmitted or maintained by a covered entity.[23] Beyond basic demographic information, PHI includes eighteen specific identifiers,[24] such as photographs, biometric identifiers, Social Security numbers, email addresses, and other data points that could reasonably be used to identify an individual. This expansive definition places significant responsibility on covered entities and forms the backbone of the strong privacy norms that characterize modern medical practice.

When charting a regulatory framework for AI companies, Congress would need to adopt a similarly comprehensive approach. As AI systems become more advanced, users are disclosing more personal information (including mental health concerns, descriptions of symptoms, and even photographs) to AI platforms.[25] As a result, the scope and type of information held by AI companies increasingly resemble the information captured by HIPAA’s eighteen identifiers.[26] Congress could therefore again look to the standard of confidentiality enumerated under HIPAA in determining which categories of information should receive heightened protection in AI chatbot use.

If Congress aims to replicate the privacy safeguards already embedded in health law, then importing HIPAA’s PHI framework into AI regulation presents a consistent, precedented path forward. Establishing a clear definition of protected information, parallel to HIPAA’s eighteen identifiers, would help ensure that AI companies handle increasingly intimate user disclosures with an appropriate degree of confidentiality.

IV. Conclusion

HIPAA’s definition of protected health information, alongside the “minimum necessary” rule, can provide a model for Congress in drafting new legislation regarding data privacy on AI platforms. Though AI presents no shortage of novel issues, it may be best for the legislature to address familiar topics, such as data security, when beginning to draft regulations. Although data privacy on the internet has been and remains a polarizing topic, the increased personal use of AI platforms will force the legislature to reexamine some of its positions on data security in an increasingly online world. Data privacy laws such as HIPAA, and their progeny, such as HITECH, illustrate both Congress’s prioritization of certain categories of information and the modernization of those principles over time. If Congress aims to continue prioritizing data privacy, it will need to address the growing concerns presented by AI.


[1] Mental Health By the Numbers, National Alliance on Mental Illness (2025), https://www.nami.org/about-mental-illness/mental-health-by-the-numbers/ [https://perma.cc/SF6R-22JX].

[2] Id.

[3] State of the Behavioral Health Workforce, 2024, Health Resources and Services Administration (Nov. 2024), https://bhw.hrsa.gov/sites/default/files/bureau-health-workforce/state-of-the-behavioral-health-workforce-report-2024.pdf [https://perma.cc/SF6R-22JX].

[4] Id. at 3.

[5] Id. at 4.

[6] See Windsor Johnston, With therapy hard to get, people lean on AI for mental health. What are the risks?, NPR (Sept. 30, 2025), https://www.npr.org/sections/shots-health-news/2025/09/30/nx-s1-5557278/ai-artificial-intelligence-mental-health-therapy-chatgpt-openai [https://perma.cc/MXG7-TDD3].

[7] See Liana Spytska, The use of artificial intelligence in psychotherapy: development of intelligent therapeutic systems, PubMed Central (Feb. 28, 2025), https://pubmed.ncbi.nlm.nih.gov/40022267/ [https://perma.cc/WL38-ABZE].

[8] OpenAI, Privacy Policy (June 27, 2025), https://openai.com/policies/privacy-policy/ [https://perma.cc/QL73-NWAK].

[9] See Gene Petrino, The Data Big Tech Companies Have On You, security.org (Oct. 10, 2025), https://www.security.org/resources/data-tech-companies-have/ [https://perma.cc/98VV-Z5Y9].

[10] Health Insurance Portability and Accountability Act, 42 U.S.C. § 1320d (1996).

[11] See 45 C.F.R. § 160.103.

[12] Id.

[13] Id.

[14] Spytska, supra note 7.

[15] See 42 U.S.C. § 17934 (2009).

[16] Id.

[17] See 45 C.F.R. § 164.502(b).

[18] See 42 U.S.C. § 1320d-6(b).

[19] See 45 C.F.R. § 164.502(b).

[20] Id.

[21] Id.

[22] Id.

[23] See 45 C.F.R. § 160.103.

[24] See 45 C.F.R. § 164.514(b)(2)(i).

[25] Johnston, supra note 6.

[26] Jennifer King et al., User Privacy and Large Language Models: An Analysis of Frontier Developers’ Privacy Policies, Proc. AAAI/ACM Conf. AI, Ethics & Soc’y, vol. 8, no. 2 (2025).
