AI Discrimination: How Data Security Law Might Be the Solution

Sarah Saadeh, Contributing Member 2024-2025

Intellectual Property and Computer Law Journal

I. Introduction

Have you ever thought it was strange that the ads on your phone were so specific that they seemed to consider your race, gender identity, or ethnicity? Artificial intelligence (AI), just like humans, can be biased. As a result, AI could actually be targeting you with ads based on your identity.

This article will discuss AI bias and how it is being used and regulated. Part II will trace how AI has advanced, its uses, and the dangers of AI bias. Part III will cover how data privacy law can be used to combat AI bias and compare what the EU is doing. Part IV concludes by advocating for stronger federal regulatory protections against AI bias.

II. Background

How AI Works

    In simple terms, AI is technology that allows computers to “think” like humans do.[1] For traditional AI, a programmer provides the AI with a frame or calculation and the AI responds with specific answers.[2] Today, society uses generative AI, which can form complex original content.[3] For example, instead of simply answering a question, generative AI can write an entire brief or even make a video from synthesizing a large amount of information.[4]

    Generative AI functions in three steps: (1) training, (2) tuning, (3) generation, evaluation and more tuning.[5] Programmers must input a large amount of data to train AI to be accurate.[6] The training step is critical as it builds the basis from which the AI will learn.[7] During the tuning step, the programmer sets the AI’s tasks to hone its accuracy.[8] Then, during the generation step, the AI produces output, which it then re-inputs into its system, creating a feedback loop and improving its knowledge.[9] If the AI is trained with misleading or false data, the AI will generate wrong outputs.[10]
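The three steps above, and the feedback loop between them, can be made concrete with a deliberately simplified sketch. Everything here (the word-counting "model," the tune step, and the sample data) is invented for illustration and bears no resemblance to how production generative models actually work:

```python
from collections import Counter

def train(corpus):
    """Step 1, training: build a simple word-frequency 'model' from input data."""
    model = Counter()
    for document in corpus:
        model.update(document.split())
    return model

def tune(model, task_words):
    """Step 2, tuning: weight the model toward words relevant to its assigned task."""
    for word in task_words:
        if word in model:
            model[word] *= 2
    return model

def generate(model, n=3):
    """Step 3, generation: emit the most frequent words the model knows."""
    return [word for word, _ in model.most_common(n)]

# Train, tune, then close the feedback loop by re-inputting the output.
model = train(["the cat sat", "the dog sat"])
model = tune(model, ["cat"])
output = generate(model)
model.update(output)  # generated output becomes new training data
```

The last line is the feedback loop described above: whatever the model emits, right or wrong, is folded back into what it "knows," which is why training on misleading data compounds into wrong outputs.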

    Generative AI has become commonplace in society, especially in the workforce. For example, employers use AI to screen applicants’ resumes, saving hiring managers time sorting through applicants.[11] In the healthcare sector, AI has achieved high accuracy in detecting breast cancer from mammograms.[12] In the business world, AI has improved compliance tools, from whistleblower detection to regulatory change management.[13]

    However, there is also a more nefarious side to AI use. Technology has a history of racial bias when one considers how it affects Black and Brown bodies compared to White bodies.[14] A famous example is a soap dispenser that could not be triggered by darker skin, discriminating against people of color by not dispensing soap for them.[15] This can unfortunately carry over into AI as well. If an AI is trained in a biased way, its outputs will be biased.[16] When two different AI systems were asked to produce images of a “beautiful woman,” nine of Midjourney’s ten output images depicted fair-skinned women, and only eighteen percent of Stable Diffusion’s outputs were women with darker skin tones.[17] Even more starkly, ninety-eight percent of Midjourney’s outputs for “normal women” were women with a pale complexion.[18]

    When an AI was trained on information from Reddit, a platform known for trolling language, researchers described the result as the “world’s first psychopath AI.”[19] For example, when the Reddit-trained AI was asked to describe inkblots, it said one image depicted a dead man, while an AI not trained on Reddit described the same inkblot as a vase with flowers.[20]

AI Bias

    AI can amplify biases and automate unlawful discrimination in several contexts.[21] For example, when Amazon created a computer program to review job applicants’ resumes in search of the most talented candidates, the program taught itself, using previous employment data, that male candidates were preferable to female candidates.[22] This happened because Amazon trained the program to sort applicants by observing patterns in resumes submitted over a ten-year period.[23] Most of the resumes fed to the program came from men, reflecting male dominance in the technology field during that time.[24] Therefore, Amazon’s candidate-sorting program “learned” that men are the more “ideal” candidates.[25] Although Amazon caught this unlawful discrimination against female candidates, there is no guarantee that a similar system would not discriminate again in another way.[26] The American Civil Liberties Union also weighed in on this AI failure by Amazon.[27]
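The mechanism behind this failure, a model absorbing bias from skewed historical data, can be illustrated with a toy screener. Everything below (the scoring rule, the keywords, the made-up hiring history) is invented for illustration and is not Amazon's actual system:

```python
from collections import Counter

def train_screener(hired, rejected):
    """A word's weight is its count among hired resumes minus its count among
    rejected ones, so words common in rejections take on negative weight."""
    weights = Counter()
    for resume in hired:
        weights.update(resume.split())
    for resume in rejected:
        weights.subtract(resume.split())
    return weights

def score(weights, resume):
    return sum(weights[word] for word in resume.split())

# Invented history reflecting a male-dominated applicant pool:
hired = ["chess club captain engineer"] * 8
rejected = ["women's chess club captain engineer"] * 2

weights = train_screener(hired, rejected)

# Two applicants with otherwise identical resumes:
score_a = score(weights, "chess club captain engineer")
score_b = score(weights, "women's chess club captain engineer")
# score_b < score_a: the word "women's" alone lowers the score
```

No one told this screener to prefer men; it simply mirrored the pattern in its training data, which is the essence of the problem the EEOC guidance discussed next is meant to address.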

    In 2023, the Equal Employment Opportunity Commission (EEOC) addressed this problem by releasing guidance on how an employer’s use of machine- or AI-driven processes might violate Title VII of the Civil Rights Act of 1964.[28] Under the EEOC guidance, AI programs that make or advise decisions about hiring will be treated as a “selection procedure” subject to Title VII’s ban on discrimination against certain protected classes.[29] The guidance does not have the force of law and is not binding.[30] After the EEOC’s attempt to warn about AI’s potential for unlawful discrimination in employment matters, plaintiffs have challenged AI systems under Title VII.[31] In a 2024 case, Mobley v. Workday, a job seeker sued Workday, a job platform, for employment discrimination.[32] The applicant alleged that Workday’s applicant-screening tool discriminated on the basis of disability, race, and age.[33] Even with the EEOC’s guidance, AI bias still permeates employment matters.

    Yet employment is not the only area where AI bias is experienced. Housing, advertising, and medicine are all affected by AI bias.[34] For example, a study out of Berkeley showed that Black and Latinx borrowers were charged higher loan rates than White borrowers by an AI mortgage system, and AI used in radiology has been shown to take shortcuts based on patients’ race, leading to inaccuracies.[35] Although many attempts have been made to address this problem, none are as comprehensive as they must be to stop AI bias and discrimination.

III. Discussion

      In 2023, President Biden issued Executive Order (EO) 14110, titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which sought to establish a coordinated approach to promoting responsible AI development and use.[36] In the EO, President Biden promised that the government would take proper measures to safeguard consumers against possible harms from AI, including fraud, breaches of privacy, and bias, via consumer protection laws.[37] Specifically, President Biden noted that AI has made it easier for bad actors to use sensitive information for their desired purposes.[38] More robust data privacy laws that stop AI from harvesting personal data can help address AI bias and discrimination.[39]

Current U.S. Data Privacy Law That Could Combat AI Bias

        Although the United States lacks a uniform federal law on how companies employ personal data for AI use, it has several data privacy frameworks that could mitigate AI bias until a uniform law is enacted.[40] For example, the Obama Administration put forth the Consumer Privacy Bill of Rights in 2012.[41] Though nonbinding, the Consumer Privacy Bill of Rights provides an important basis for future data privacy legislation.[42] It states that consumers have the right to control the personal data that companies collect from them and the uses of that data.[43] Additionally, it provides that consumers have a right to expect that companies will collect and use data in a way that aligns with the context in which consumers provided it.[44] The Obama Administration hoped that this framework would ensure that privacy rules keep pace with the rapid growth and innovation of technology.[45]

        Similarly, in 2022, the Biden Administration released a blueprint for an AI Bill of Rights, which, although nonbinding, outlines five principles to govern AI development and use.[46] One of the principles is data privacy.[47] It states that every person should be safe from abusive data practices through built-in safeguards, and that every person should have agency over how data about them is used.[48] The blueprint demands additional protections for “sensitive domains” where AI discrimination is more likely, such as health, employment, and personal finance.[49] Alas, the blueprint acknowledges that federal law has not kept pace with the expanding collection of personal data and calls for additional protections for Americans.[50]

        Congress tried to answer the blueprint’s call. Congressman Frank Pallone introduced the American Data Privacy and Protection Act in the 2021-2022 congressional cycle.[51] This bill would provide the uniform data privacy law needed by giving consumers a baseline of data privacy rights and mechanisms for establishing oversight and enforcement.[52] However, in Summer 2024, the House Energy and Commerce Committee canceled a scheduled markup of the bill after provisions preventing data-driven discrimination and allowing individuals to opt out of AI-enabled decisions by private companies were eliminated.[53]

Comparison to European Union Data Privacy and Protection Laws

        It is in the United States’ best interest to adopt broader federal privacy and AI policies so that as technology evolves, the policies will follow. Right now, the United States is playing catch-up. The United States should look to EU law on data governance, including the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Artificial Intelligence Act (AI Act), for guidance.[54]

        The GDPR sets standards for transparency in the use of consumer data and lists consumer protections. The GDPR requires consent for the use of a person’s personal data in training AI models.[55] If the United States had a similar federal privacy policy, it could likewise protect consumer data that is collected by and fed to AI.

        The DSA sets requirements for online platforms and intermediaries, including transparency in disclosing algorithms and a prohibition on targeted advertising based on sensitive data like ethnicity.[56] This matters for AI regulation because AI is used to make targeted ads. For example, when Facebook’s ad-delivery algorithm served targeted ads that discriminated on the basis of gender and race, that conduct would have been a breach under the DSA.[57] But because the United States has no similar federal regulation, recourse instead came through a civil rights claim by the United States Department of Housing and Urban Development.[58]

        Most notable to this discussion is the AI Act, the first comprehensive AI law in the world. It regulates AI by sorting systems into risk categories, including unacceptable and high risk. The unacceptable-risk category specifically targets AI that would be biased against race by banning the classification of people on the basis of socio-economic standing, personal characteristics, or membership in vulnerable groups, and by banning biometric identification, including facial recognition.[59] These safeguards help ensure that AI is more ethical and limit discrimination on the basis of race and other sensitive classes.

        Something else to keep in mind is the movement of data. Because AI is trained on data, and companies want to move that data across borders, companies must follow each new jurisdiction’s rules. Questions such as where the data came from, who has the rights to transfer it, and whether there is consent now come into play. Since the United States does not have regulations similar to the EU’s, American companies may have to adopt the EU standard if they want to do business abroad. Baseline standards in the United States would put companies on a better footing when doing business overseas.

        California has already taken a more EU-style approach with its passage of the California AI Transparency Act.[60] Colorado, Utah, and Illinois also have AI transparency requirements, but California’s is the first comprehensive law that sets out a framework to follow.[61] The Act requires the creation of AI detection tools and mandates disclosures.[62] In combination with California’s existing consumer data privacy protections under the California Consumer Privacy Act, this better protects consumers against AI bias.[63]

IV. Conclusion

        Companies must remain cognizant that when they put information into an AI platform, the information is not always secure and private, as AI gets smarter by learning from the information previously input into it.[64] Using an internal AI can be more secure, because the information could theoretically remain internal rather than being learned from by a model exposed to the whole internet, but that approach is more expensive.[65] Even then, privacy concerns can arise if a customer requests that their information be wiped, since the company’s AI would have already stored the customer’s information and would be using and learning from it when making decisions for other customers.[66] Ultimately, these are some of the considerations companies should weigh before deciding whether an AI tool is worth the risk or a good fit for them.
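This “un-learning” problem appears in even the simplest model: deleting a record from storage does not remove its influence from a model already trained on it. The records and the word-count “model” below are invented purely for illustration:

```python
from collections import Counter

# A trivial "model": word counts learned from customer records.
records = ["alice likes hiking", "bob likes chess"]
model = Counter()
for record in records:
    model.update(record.split())

# The customer "alice" asks for her data to be wiped from storage.
records.remove("alice likes hiking")

# Her record is gone from the database, yet the trained model still
# carries her data, and nothing in training recorded which counts came
# from her, so there is no simple way to subtract her influence back out.
print("alice" in " ".join(records))  # False
print(model["alice"])                # 1
```

Real generative models blend each training example into millions of shared parameters rather than separable counts, which makes the subtraction problem far harder, not easier.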

        Federal regulations must be put in place not only to protect against AI bias but also to help companies understand the guidelines they should be working under, notably before they become too invested in AI tools that do not follow best ethical practices and that discriminate against consumers, potential employees, and citizens without intending to do so. AI is the future and is here to stay. Instead of fearing it, society should leverage AI for its betterment, but doing so requires safeguards that can keep up with AI as it advances, unlike existing United States law.


        [1] Cole Stryker & Eda Kavlakoglu, What is Artificial Intelligence (AI)?, IBM (Aug. 16, 2024), https://www.ibm.com/topics/artificial-intelligence [https://perma.cc/8AVP-BC4U].

        [2] Id.

        [3] Id.

        [4] Id.

        [5] Id.

        [6] Id.

        [7] Id.

        [8] Id.

        [9] Id.

        [10] Jay Calavas, What Happens When You Fuel AI With Bad Data?, Tealium (Feb. 8, 2024), https://tealium.com/blog/data-strategy/what-happens-when-you-fuel-ai-with-bad-data/#:~:text=Diminished%20Accuracy%20and%20Reliability,outputs%20become%20unreliable%20and%20unusable [https://perma.cc/DEP3-94KU].

        [11] Jamie Birt, How To Optimize Your Resume for AI Scanners (With Tips), Indeed (July 30, 2024), https://www.indeed.com/career-advice/resumes-cover-letters/resume-ai [https://perma.cc/CP2U-QZSR].

        [12] Can Artificial Intelligence Perfect Mammography?, NYU Langone Health: Perlmutter Cancer Ctr. Magazine (2021), https://nyulangone.org/news/can-artificial-intelligence-perfect-mammography [https://perma.cc/ZT97-5H49].

        [13] Rebecca Kappel, The Top 7 AI Compliance Tools of 2024, Centraleyes (Aug. 5, 2024), https://www.centraleyes.com/top-ai-compliance-tools/ [https://perma.cc/3D64-CGMP].

        [14] Taylor Synclair Goethe, Bigotry Encoded: Racial Bias in Technology, Reporter (Mar. 2, 2019), https://reporter.rit.edu/tech/bigotry-encoded-racial-bias-technology [https://perma.cc/6SSS-H3YK].

        [15] Id.

        [16] Bias in AI, Chapman Univ., https://www.chapman.edu/ai/bias-in-ai.aspx#:~:text=It%20is%20important%20to%20recognize,is%20not%20diverse%20or%20representative [https://perma.cc/Z34M-FHWS].

        [17] Nitasha Tiku & Szu Yu Chen, What AI Thinks A Beautiful Woman Looks Like, Washington Post (May 31, 2024), https://www.washingtonpost.com/technology/interactive/2024/ai-bias-beautiful-women-ugly-images/ [https://perma.cc/V9C6-BSY7].

        [18] Id.

        [19] Brian Heater, Bad things happen when you train AI using ‘the darkest corners of Reddit’, Tech Crunch (June 7, 2018, 12:19 PM), https://techcrunch.com/2018/06/07/bad-things-happen-when-you-train-ai-using-the-darkest-corners-of-reddit/ [https://perma.cc/LCP6-5V72].

        [20] Id.

        [21] Artificial Intelligence and Equal Employment Opportunity for Federal Contractors, U.S. Dep’t of Labor, https://www.dol.gov/agencies/ofccp/ai/ai-eeo-guide [https://perma.cc/U5SH-FAXB]; see Artificial Intelligence and Civil Rights, U.S. Dep’t of Justice, https://www.justice.gov/crt/ai [https://perma.cc/UE5P-ZCR5].

        [22] Jeffrey Dastin, Insight – Amazon scraps secret AI recruiting tool that showed bias against women, Reuters (Oct. 10, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/ [https://perma.cc/8JAF-MRJQ].

        [23] Id.

        [24] Id.

        [25] Id.

        [26] Id.

        [27] Rachel Goodman, Why Amazon’s Automated Hiring Tool Discriminated Against Women, ACLU (Oct. 12, 2018), https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against [https://perma.cc/KLE7-KTDL].

        [28] Press Release, EEOC, EEOC Releases New Resource on Artificial Intelligence and Title VII (May 18, 2023), https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii [https://perma.cc/45MF-9D9B].

        [29] Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, EEOC (May 18, 2023), https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial [https://perma.cc/JDB5-YGZB].

        [30] Id.

        [31] Mobley v. Workday, No. 23-cv-00770-RFL, 2024 U.S. Dist. LEXIS 126336, at *1 (N.D. Cal. July 12, 2024).

        [32] Id.

        [33] Id.

        [34] Danya Sherbini, AI is making housing discrimination easier than ever before, U. Chi. Kreisman Initiative for Housing L. & Pol’y (Feb. 12, 2024), https://kreismaninitiative.uchicago.edu/2024/02/12/ai-is-making-housing-discrimination-easier-than-ever-before/#:~:text=Meanwhile%2C%20a%20research%20study%20from,compared%20to%20their%20white%20counterparts [https://perma.cc/2WP6-NVES]; Hal Conick, The ethics of targeting minorities with dark ads, Am. Marketing Ass’n (Mar. 21, 2019), https://www.ama.org/marketing-news/the-ethics-of-targeting-minorities-with-dark-ads/ [https://perma.cc/XM3F-UC4D]; Anne Trafton, Study reveals why AI models that analyze medical images can be biased, MIT News (June 28, 2024), https://news.mit.edu/2024/study-reveals-why-ai-analyzed-medical-images-can-be-biased-0628 [https://perma.cc/7PEH-E7K9].

        [35] Id.

        [36] Exec. Order No. 14,110, 3 C.F.R. § 75191 (2023).

        [37] Id.

        [38] Id.

        [39] Caitlin Chin-Rothmann, Protecting Data Privacy as a Baseline for Responsible AI, Ctr. for Strategic & Int’l Stud. (July 18, 2024), https://www.csis.org/analysis/protecting-data-privacy-baseline-responsible-ai [https://perma.cc/86ZX-98PC].

        [40] Id.

        [41] White House Press Release, Fact Sheet: Plan to Protect Privacy in the Internet Age by Adopting a Consumer Privacy Bill of Rights (Feb. 23, 2012), https://obamawhitehouse.archives.gov/the-press-office/2012/02/23/fact-sheet-plan-protect-privacy-internet-age-adopting-consumer-privacy-b [https://perma.cc/N6TD-BUUR].

        [42] Id.

        [43] Id.

        [44] Id.

        [45] Id.

        [46] Office of Science and Technology Policy, White House, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf [https://perma.cc/6EDC-2X55].

        [47] Id.

        [48] Id.

        [49] Id.

        [50] Id.

        [51] American Data Privacy and Protection Act, H.R. 8152, 117th Cong. § 669 (2022) (The bill remains at the introduced level and has not been passed).

        [52] Id.

        [53] Chin-Rothmann, supra note 39; Press Release, Ctr. for Civ. Rts. & Technology News, Civil Society Organizations Urge Markup Delay for Privacy Bill, Restoration of Civil Rights Protections (June 25, 2024), https://civilrights.org/2024/06/25/civil-society-urge-markup-delay-privacy-bill-restoration-civil-rights-protections/ [https://perma.cc/PMJ8-M4MJ].

        [54] Id.

        [55] The Intersection of GDPR and AI and 6 Compliance Best Practices, Exabeam, https://www.exabeam.com/explainers/gdpr-compliance/the-intersection-of-gdpr-and-ai-and-6-compliance-best-practices/#:~:text=GDPR%20defines%20the%20requirement%20for,grounds%20of%20%E2%80%9Clegitimate%20interest%E2%80%9D [https://perma.cc/8T8F-WT8S].

        [56] The Impact of the Digital Services Act on Digital Platforms, E.U., https://digital-strategy.ec.europa.eu/en/policies/dsa-impact-platforms [https://perma.cc/QWB5-AXPL].

        [57] Karen Hao, Facebook’s ad-serving algorithm discriminates by gender and race, MIT Tech. Rev. (Apr. 5, 2019), https://www.technologyreview.com/2019/04/05/1175/facebook-algorithm-discriminates-ai-bias/ [https://perma.cc/TS4M-AN7U].

        [58] Id.

        [59] Press Release, European Parliament, EU AI Act: first regulation on artificial intelligence (June 18, 2024, 4:29 PM), https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#ai-act-different-rules-for-different-risk-levels-0 [https://perma.cc/4MNG-VDZ5].

        [60] California AI Transparency Act, Cal. Bus. & Prof. Code § 22757.

        [61] Arsen Kourinian, Howard W. Waltzman, & Mickey Leibner, New California Law Will Require Ai Transparency and Disclosure Measures, Mayer Brown (Sept. 23, 2024), https://www.mayerbrown.com/en/insights/publications/2024/09/new-california-law-will-require-ai-transparency-and-disclosure-measures [https://perma.cc/NY4M-FKYS].

        [62] Id.

        [63] California Consumer Privacy Act (CCPA), Cal. Att’y Gen., https://oag.ca.gov/privacy/ccpa [https://perma.cc/J4CP-NEGV].

        [64]  Katharine Miller, Privacy in an AI Era: How Do We Protect Our Personal Information?, Stanford Univ.: Human-Centered Artificial Intel. (Mar. 18, 2024), https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information [https://perma.cc/GX4N-HNQ3].

        [65] AI pricing: how much does Artificial Intelligence cost?, Future Processing (Mar. 27, 2024), https://www.future-processing.com/blog/ai-pricing-is-ai-expensive/ [https://perma.cc/ADP5-K4PV].

        [66] Stephen Pastis, A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data, Fortune (Aug. 30, 2023, 12:43 PM), https://fortune.com/europe/2023/08/30/researchers-impossible-remove-private-user-data-delete-trained-ai-models/ [https://perma.cc/FY49-6PB8].
