Dr. AI: How AI Is Affecting the Medical Industry

Sarah Saadeh, Contributing Member 2024-2025

Intellectual Property and Computer Law Journal

I. Introduction

Everyone wants a doctor with a good education and professional experience. If a loved one is sick, most people would prefer the most experienced doctor available to treat them. But recently, a new, young physician has been shaking up the medical profession, and it did not even go to medical school. Artificial intelligence (AI) has made its way into hospitals, and even for patients still seeing human doctors, AI is doing a great deal of work behind the scenes.

While doctors owe patients a duty of care, AI owes no duty to anyone. This article discusses how AI can be, and currently is being, used in the medical field. It then explains physicians’ responsibilities to patients and, through relevant cases and laws, how AI is now affecting their duty to their patients. It concludes that further regulation around AI is needed to protect not only patients but physicians as well.

II. Background

AI is currently being used in many different healthcare applications, from diagnosing patients and transcribing medical documents to drug discovery and development and administrative tasks.[1] At times, AI even gets better results than humans. In diagnostics specifically, AI produces high accuracy in breast cancer detection using mammograms.[2] While human radiologists read mammograms with about eighty percent accuracy, AI reaches ninety percent.[3] As a result, radiologists who use AI as a tool have been able to increase their accuracy.[4]

So, if a physician misdiagnoses a patient based on AI, the patient could potentially bring a tort claim against both the physician and the AI company.[5] A tort is an action “that gives rise to injury or harm to another and amounts to a civil wrong for which courts impose liability.”[6] The lack of governmental guidance makes adopting the new technology a risk for physicians.[7]

Some of the administrative tasks AI can perform in healthcare are also being put to use in the insurance industry.[8] AI can use patients’ information to determine whether they have a valid insurance claim and even help the insurance company determine the amount of coverage a patient will receive.[9]

AI has even passed the U.S. Medical Licensing Examination.[10] It is clear that AI is here to stay, and some medical schools are accordingly beginning to incorporate training on how physicians can use AI into their curricula.[11]

Doctors owe a fiduciary duty of care to their patients; state tort law determines the malpractice cause of action available to a tort victim.[12] These duties include confidentiality, non-abandonment, the standard of care, and informed consent.[13] The question now is how AI plays a role in these duties. American Medical Association President Jesse Ehrenfeld said, “We’re seeing lawsuits already,” which is causing stress for physicians who would prefer more guidance.[14] Since Congress has been silent on liability limits as they relate to AI, it will be up to the courts to determine the standard of care from existing law.[15]

Part of the duty of care includes keeping up with technological advances, and a doctor can be held liable for patient harm caused by outdated care.[16] A doctor cannot be expected to adopt every development the day after it comes out; the standard typically requires a doctor to keep up once the great majority of doctors have done so or once the medical board releases guidelines.[17]

III. Discussion

President Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence (EO 14110) specifically addresses the healthcare industry, acknowledging that AI, if misused, can bring harm and discrimination to patients.[18] In the Executive Order, President Biden directed the Department of Health and Human Services (HHS) to “establish a safety program to receive reports of—and act to remedy – harms or unsafe healthcare practices involving AI.”[19]

This task has been taken on by HHS’s Office of the Chief Artificial Intelligence Officer (OCAIO).[20] OCAIO has released the Trustworthy AI (TAI) Playbook, which covers how trustworthy AI protects against four main risk areas: strategy and reputation, cyber and privacy, legal and regulatory, and operations.[21] Among the statutory authorities that give HHS power to regulate AI are Section 1557 of the Patient Protection and Affordable Care Act, which protects against unlawful discrimination, and the Public Health Service Act, which gives HHS the authority to regulate “the electronic exchange and use of health information,” including information entered into AI.

The Health Information Technology for Economic and Clinical Health (HITECH) Act “indirectly authorizes HHS to regulate AI applications by establishing requirements for the safeguarding and notification of a breach of protected health information which may occur through use of an AI application by a HIPAA regulated entity.”[22] HIPAA-regulated entities include health plans, health care clearinghouses, and health care providers who electronically transmit any health information.[23]

On the privacy front, the guidance provides that patients should give informed consent if their data is being used to train an AI.[24] If a third party implements the AI tool, there should be limitations on the transfer of information; for example, for “an AI solution that helps users identify their colon cancer risk factors, it is important that all Personally Identifiable Information (PII) (e.g., name, email, IP address) collection is minimized and stripped prior to use by the AI system.”[25] This last step is extremely important because AI is not necessarily able to unlearn information, which could leave sensitive patient information behind in the system.[26]

OCAIO has also released an inventory of AI use cases documenting AI use in line with Executive Order 13960, an earlier order requiring federal agencies to inventory their AI use cases.[27] Most of the applications are on the administrative side, ranging from a “[b]ot [that] pulls HR data related to staffing changes” to a “Chatbot (voice) [that] is an automated phone response for general badging questions.”[28] A takeaway is that AI use for administrative tasks in the healthcare field is generally acceptable.

On the insurer side, there have already been class actions against insurance companies for using AI to wrongly deny patients coverage for claims that would otherwise have been covered.[29] The claims allege that United Healthcare, the insurer, improperly used AI “to make erroneous health care determinations generated by the algorithm” and further accuse United Healthcare “of using the AI tool to override the determinations of medical professionals, including ones employed by the insurer.”[30] The litigation is still ongoing, and United Healthcare denies the claims, but this is not the only action against insurers, and more are likely to follow as AI is utilized.[31]

Cigna, another insurer, faces a class action for using an AI that automatically denied claims without reviewing patients’ histories. Under Connecticut law, doctors must “examine patient records, review coverage policies, and use their expertise to decide whether to approve or deny claims to avoid unfair denials.”[32] Using the AI skipped over this Connecticut provision completely, which would be a breach of the implied covenant of good faith and fair dealing.[33]

To combat insurance companies breaching their duty to the insured, the National Association of Insurance Commissioners (NAIC) has released its model bulletin on the Use of Artificial Intelligence Systems by Insurers.[34] It sets out expectations that insurance companies use AI ethically and transparently and take steps not to discriminate against patients.[35]

As of October 2024, NAIC’s model had been adopted by seventeen jurisdictions, and four others had put out their own insurance-specific regulations.[36] The adopting jurisdictions are Alaska, Arkansas, Connecticut, the District of Columbia, Illinois, Kentucky, Maryland, Michigan, Nebraska, Nevada, New Hampshire, Pennsylvania, Rhode Island, Vermont, Virginia, Washington, and West Virginia. Jurisdictions with their own policies include California, Colorado, and New York, and other states are likely to follow suit, especially as more litigation arises.[37]

While there is still little guidance on AI, many more plaintiff class actions are likely to arise under existing laws and based on breach of the duty of care. More agencies need to provide guidance so that doctors have the confidence to focus on treating patients and lean into AI tools when helpful, instead of worrying about getting sued. Regulation is also needed to hold insurance companies accountable so that patients receive coverage for the care they need. It is dangerous to allow the insurance industry to settle into a standard of using AI to deny patients.

Patients need to be made aware when AI is being used in their care and when their data is being entered into an AI, so they can properly give their informed consent. Just as a doctor walks a patient through the steps of treatment, informing patients when AI is in play is not a large burden.

IV. Conclusion

Based on what HHS has released regarding appropriate uses of AI, and to maintain the duty of care owed to patients, doctors and insurance companies should adopt, and regulators should require, the Human-in-the-Loop (HITL) practice.[38] HITL “refers to a collaborative approach where human expertise is integrated into the decision-making loop of AI systems”; in a healthcare setting, the application would use AI “in processing vast amounts of data quickly and recognizing patterns, while human clinicians bring critical thinking, empathy, and nuanced understanding to the table.”[39]

AI and new innovations are currently moving too quickly for regulations to keep up. A baseline HITL framework ensures that, as AI advances and gains even more capabilities, patients are not forgotten or discriminated against by the algorithms. If a computer makes a mistake, a human should be there to review it, especially in health care, where a set of real eyes on the issue could save someone’s life.


[1]Revolutionizing Healthcare: How AI is Transforming the Health Care Industry, L.A. Pac. Univ. (Dec. 21, 2023), https://www.lapu.edu/ai-health-care-industry/ [https://perma.cc/RUN2-E7QM].

[2] NYU Langone Health, Can Artificial Intelligence Perfect Mammography?, NYU Langone News (2021), https://nyulangone.org/news/can-artificial-intelligence-perfect-mammography [https://perma.cc/SCC7-FVMB].

[3] Id.

[4] Id.

[5] Dave Fornell, Video: Who Gets Sued When Radiology AI Fails?, Radiology Bus. (Feb. 2, 2023), https://radiologybusiness.com/topics/artificial-intelligence/video-who-gets-sued-when-radiology-ai-fails [https://perma.cc/E4FK-XY2H].

[6] Tort, Legal Info. Inst. (Nov. 22, 2024), https://www.law.cornell.edu/wex/tort#:~:text=A%20tort%20is%20an%20act,detriment%20that%20an%20individual%20suffers. [https://perma.cc/5K3N-378M].

[7] Fornell, supra note 5.

[8] Dennis Sebastian, Artificial Intelligence and Health Insurance, RGA Knowledge Center, (Sep. 2021), https://www.rgare.com/knowledge-center/article/a.i.-and-health-insurance [https://perma.cc/P8U3-MQ4L].

[9] Id.

[10] Michael DePeau-Wilson, AI Passes U.S. Medical Licensing Exam, MedPage Today (Jan. 19, 2023), https://www.medpagetoday.com/special-reports/exclusives/102705 [https://perma.cc/8JKG-9L7M].

[11] Marc Zarefsky, How AI is Being Incorporated into Medical School, Am. Med. Ass’n. (Sept. 5, 2024), https://www.ama-assn.org/practice-management/digital/how-ai-being-incorporated-medical-school [https://perma.cc/4XD8-RJHB].

[12] Hanan Zaki, What Is a Doctor’s Duty of Care?, FindLaw (Sept. 29, 2023), https://www.findlaw.com/injury/medical-malpractice/what-is-actionable-medical-malpractice.html [https://perma.cc/D7A9-QVTL].

[13] Cara E Davies & Randi Zlotnik Shaul, Physicians’ Legal Duty of Care and Legal Right to Refuse to Work During a Pandemic, PMC (2010), https://pmc.ncbi.nlm.nih.gov/articles/PMC2817323/ [https://perma.cc/LQG6-HEJ9].

[14] Daniel Payne, Who Pays When AI Steers Your Doctor Wrong?, Politico (Mar. 24, 2024), https://www.politico.com/news/2024/03/24/who-pays-when-your-doctors-ai-goes-rogue-00148447 [https://perma.cc/V5U9-DQ6D].

[15] Id.

[16] The Duty To Keep Current With Developments In Medical Knowledge, Gilman Bedigian, LLC, (Nov. 28, 2024), https://www.gilmanbedigian.com/the-duty-to-keep-current-with-developments-in-medical-knowledge/ [https://perma.cc/D8EF-Z7Y6].

[17] Id.

[18] FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, White Hous. (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ [https://perma.cc/5WL6-JYDN].

[19] Id.

[20] About the HHS Office of the Chief Artificial Intelligence Officer (OCAIO), U.S. Dep’t of Health & Hum. Servs., https://www.hhs.gov/programs/topic-sites/ai/ocaio/index.html [https://perma.cc/42C3-8L6X] (Nov. 18, 2021).

[21] HHS Trustworthy Artificial Intelligence (AI) Playbook, U.S. Dep’t of Health & Hum. Servs (Sept. 30 2021), https://www.hhs.gov/sites/default/files/hhs-trustworthy-ai-playbook.pdf [https://perma.cc/K58J-KHXQ].

[22] Id. at 98-101.

[23] To Whom Does the Privacy Rule Apply and Whom Will It Affect?, Nat’l Inst. of Health, https://privacyruleandresearch.nih.gov/pr_06.asp [https://perma.cc/55HL-7T3T] (Nov. 19, 2024).

[24] Id. at 22.

[25] Id. at 22.

[26] The Pecan Team, The Rise of Machine Unlearning, Pecan AI (June 25, 2024), https://www.pecan.ai/blog/the-rise-of-machine-unlearning/ [https://perma.cc/E3UX-KUEZ].

[27] Department of Health and Human Services: Artificial Intelligence Use Cases Inventory, U.S. Dep’t of Health & Hum. Servs. (June 6, 2024), https://www.hhs.gov/programs/topic-sites/ai/use-cases/index.html [https://perma.cc/6WRE-5ULE].

[28] Artificial Intelligence Use Cases – FY2022, U.S. Dep’t of Health & Hum. Servs., https://www.hhs.gov/sites/default/files/hhs-ai-use-cases-inventory.pdf [https://perma.cc/9EFY-YJEF] (Nov. 18, 2024).

[29] David S. Greenberg, Health Insurers Sued Over Use of Artificial Intelligence to Deny Medical Claims, ArentFox Schiff LLP (Dec. 22, 2023), https://www.afslaw.com/perspectives/health-care-counsel-blog/health-insurers-sued-over-use-artificial-intelligence-deny [https://perma.cc/8WZK-6GBE].

[30] Id.

[31] Id.

[32] Emily Cousins, Cigna Class Action: Algorithm Allegedly Auto-Denies 300,000 Claims, Conn. L. Trib. (Mar. 12, 2024), https://www.law.com/ctlawtribune/2024/03/12/cigna-class-action-algorithm-allegedly-auto-denies-300000-claims/?slreturn=20241118-20334 [https://perma.cc/L75G-YSLU].

[33] Id.

[34] NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers, Nat’l Assoc. of Ins. Comm’rs (2023), https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf [https://perma.cc/MX6F-Z9E9].

[35] Id.

[36] Implementation of the NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers, Nat’l Assoc. of Ins. Comm’rs (Oct. 31, 2024),  https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-ai-model-bulletin.pdf.pdf [https://perma.cc/67VU-534C].

[37] Id.

[38] The Synergy of Human-in-the-Loop and Medical AI in Diagnosis and Treatment, Hums. in the Loop, https://humansintheloop.org/the-synergy-of-human-in-the-loop-and-medical-ai-in-diagnosis-and-treatment/ [https://perma.cc/6X55-GHE8] (Nov. 18, 2024).

[39] Id.
