We Have to Start Somewhere: The European Union Takes a Step Toward Regulating Artificial Intelligence

You’ve probably heard the words ‘artificial intelligence’ more times in the past 12 months than in your entire lifetime before that. ChatGPT, a language chatbot developed by OpenAI, was released in November 2022 and quickly surged in popularity.[1] Its rise prompted many to consider the potential impacts, both positive and negative, that AI could have on our society as similar tools make their way into our everyday lives.[2]

However, governing the field of artificial intelligence is no easy task. Regulatory bodies must make every effort to harness the massive power of AI systems while avoiding heavy-handed administrative scrutiny.[3] Safety requirements should not stifle innovation, nor should innovation come at the expense of safety. The European Union (EU) has had this issue on its radar for several years.[4] The European Commission, the body responsible for proposing and enforcing legislation in the EU, introduced the first regulatory framework for artificial intelligence tools in April 2021.[5] Dubbed the “AI Act,” the proposal aims to classify AI systems based on the level of risk associated with their use and to apply lighter or stricter regulations accordingly.[6]

Proposed Risk Classifications for AI Systems in the EU

There are four levels of risk into which an AI system can fall under the EU’s proposed regulatory framework: unacceptable risk, high risk, limited risk, and minimal or no risk.[7] AI systems labeled as unacceptable risk are deemed to pose an active threat to people and are prohibited outright.[8] Examples include real-time biometric identification systems in public spaces and voice-activated toys that encourage dangerous behavior in children.[9] Any exceptions would be narrowly defined and regulated, such as the use of biometric identification in child abduction cases or to prevent an imminent terrorist threat.[10]

High-risk AI systems are those with the potential to negatively affect human safety or fundamental rights.[11] Any system falling into this category would have to meet rigorous assessment standards before release and would be continually monitored afterward.[12] This includes AI systems used in products covered by the EU’s product safety legislation, such as planes, cars, and medical devices.[13] There are eight additional areas in which AI systems are considered high risk and would have to be registered in an EU database, including the operation of critical infrastructure, law enforcement, and border control management.[14]

Limited-risk AI systems would have to adhere to transparency obligations that allow users to make an independent and informed decision about whether to proceed.[15] An example of a limited-risk AI system is one that generates or manipulates audiovisual content, such as a platform specializing in deepfakes.[16]

Lastly, AI systems that pose little to no risk to individuals, a category that encompasses the majority of AI systems currently deployed in the EU, would be allowed to operate freely.[17] The proposed framework also imposes transparency obligations on generative AI systems, like OpenAI’s ChatGPT: they would have to disclose that content is AI-generated, be designed to prevent the generation of illegal content, and publish summaries of the copyrighted data used to train them.[18]

EU Begins Voting Discussions on the Proposed Legislation

Each group of the European Parliament submitted its amendments to the AI Act in June 2022.[19] The European Parliament consists of 705 Members, elected from the 27 Member States of the European Union.[20] These Members are grouped by political affiliation rather than nationality, though not all Members belong to a group.[21] Before any vote, these political groups analyze the relevant parliamentary reports and offer amendments.[22]

The parliamentary groups wanted to ensure that any AI system deployed in the EU is “safe, transparent, traceable, non-discriminatory and environmentally friendly.”[23] Amendments to the framework also sought to establish a technologically neutral definition of ‘artificial intelligence’ that would apply to both present and future AI programs.[24] The European Commission went a step further in September 2022 when it adopted two proposals designed to facilitate civil liability claims for AI-related damages.[25] One proposal would apply a strict liability standard to smart technology across the EU to ensure that victims of AI-related harm can be fairly compensated.[26] The EU has made clear that it intends to modernize consumer protections at the same speed that artificial intelligence systems are advancing.[27]

The European Parliament finalized its negotiating position on the AI Act this past June, which triggered negotiations among the EU’s three governing institutions: the European Commission, the Council, and Parliament.[28] The three institutions will attempt to reconcile their differing versions of the Act into a uniform piece of legislation, which could require multiple rounds of revision.[29] If adopted, the AI Act will be a binding regulation upon all 27 Member States.[30] However, there would be a two-year implementation period between passage and applicability to AI providers, meaning that late 2025 is the earliest that European companies would see any regulatory enforcement.[31]

Status of AI Regulations in the United States

Although other regulatory efforts targeting online platforms will come into effect before late 2025, the European Union’s rapid and determined approach has left many wondering when similar laws might begin to appear elsewhere in the world, such as in China or the United States.[32] Many believe that the US is falling behind the EU in terms of AI regulation and oversight.[33]

Factors contributing to this lag include a general hesitancy among US lawmakers to declare any AI system too dangerous, and uncertainty over which government agency would assume the regulatory role or whether a new agency would need to be created.[34] US lawmakers have drafted and introduced bills that delineate categories in which the use of AI would be highly risky, such as elections, nuclear warfare, and facial recognition.[35] However, these bills have consistently stalled or died during the legislative process.[36]

This past May, the Biden administration released a statement emphasizing its commitment to cooperation with the EU on matters related to AI and online platforms.[37] Despite this supportive sentiment, a recent report unearthed a State Department analysis of the EU’s AI Act that was largely apprehensive about the proposed regulatory strategy.[38]

The State Department was primarily concerned that the EU regulations were overly invasive and would hinder innovation.[39] Another concern was that large companies would be able to absorb the costs of compliance with the new regulations, while smaller companies would likely be crushed trying to change course.[40] Thus, for the time being, the US government appears to have adopted a wait-and-see policy toward AI regulation. The federal government’s hands-off approach has prompted some to surmise that states will draft and implement legislation of their own accord, creating a “patchwork” of state regulations that companies will have to comply with simultaneously.[41]

This past October, President Biden issued an Executive Order targeting AI innovation and consumer protection.[42] The Executive Order, among other actions, outlines new standards for AI safety and security, calls on Congress to pass legislation to better protect individual privacy, and continues efforts to address algorithmic discrimination.[43] In the administration’s own words, algorithmic discrimination occurs when AI systems “contribute to unjustified treatment or impacts disfavoring people based on their race, color, ethnicity, sex . . . or any other classification protected by law.”[44] The administration also reaffirmed its commitment to global cooperation on the safe development and deployment of AI systems,[45] listing the nations and international bodies, including the EU, that it had engaged to discuss AI regulatory frameworks.[46] The steps taken by the Biden administration will help guide the future of AI development both domestically and internationally.[47] However, the administration acknowledged that more action will be required.[48]

Anticipated Impact of AI Regulations on US Companies

Elsewhere in the US, well-known tech giants have begun to brace for the impact these regulations will have on their operations in the EU, as well as any US regulations that might follow.[49] On the surface, many US tech companies such as Meta and OpenAI champion the idea of regulating the AI industry.[50] But where there is legislation, there is lobbying. Behind the scenes, many of these companies are reportedly working around the clock to water down any attempt to implement actual regulations.[51] This past June, more than 100 tech executives drafted an open letter laying out their concerns about the EU’s proposed regulatory framework.[52] Most of those concerns centered on the obstruction of innovation and the penalties for breaches.[53] This has led many to posit that big tech companies will simply conduct business as usual and treat any fines or related penalties as “a cost of doing business in the EU.”[54]

The Biden administration’s recent Executive Order establishes new requirements and standards for companies developing high-risk AI systems.[55] These developers would be required to notify the government and submit safety test results on a regular basis.[56] The Order also directly addresses job displacement, competitive development in the marketplace, and data privacy.[57]

Despite some general uneasiness about the current state of artificial intelligence and the role it should or should not play in our society going forward, many people are hopeful about the future of AI.[58] If AI is regulated in a balanced and transparent manner, its possibilities are truly limitless.


[1] Chat GPT: What is it?, Univ. of Cent. Ark., https://uca.edu/cetal/chat-gpt/ (last visited Oct. 29, 2023).

[2] What is the impact of Artificial Intelligence in our Daily Lives?, Jaro Educ., https://www.jaroeducation.com/blog/what-is-the-impact-of-artificial-intelligence-in-our-daily-lives/ (last visited Nov. 12, 2023).

[3] Gardiner Morse, Harnessing Artificial Intelligence, Harvard Bus. Rev., https://hbr.org/2020/05/harnessing-artificial-intelligence (last visited Nov. 12, 2023).

[4] EU: AI Act must ban dangerous, AI-powered technologies in historic law, Amnesty Int’l (Sept. 28, 2023), https://www.amnesty.org/en/latest/news/2023/09/eu-ai-act-must-ban-dangerous-ai-powered-technologies-in-historic-law/.

[5] European Commission: Overview, Eur. Union, https://european-union.europa.eu/institutions-law-budget/institutions-and-bodies/search-all-eu-institutions-and-bodies/european-commission_en (last visited Oct. 29, 2023); EU AI Act: first regulation on artificial intelligence, Eur. Parl. (June 14, 2023, 2:06 PM), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.

[6] Eur. Parl., supra note 5; About, The Artificial Intelligence Act, https://artificialintelligenceact.eu/about/ (last visited Oct. 31, 2023).

[7] Eur. Parl., supra note 5; Regulatory Framework Proposal on Artificial Intelligence, Eur. Comm’n, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (last visited Oct. 31, 2023).

[8] Eur. Parl., supra note 5.

[9] Id.

[10] Eur. Comm’n, supra note 7.

[11] Eur. Parl., supra note 5.

[12] Id.

[13] Id.

[14] Id.

[15] Eur. Parl., supra note 5; Eur. Comm’n, supra note 7.

[16] Eur. Parl., supra note 5.

[17] Eur. Comm’n, supra note 7.

[18] Eur. Parl., supra note 5.

[19] Developments, The A.I. Act, https://artificialintelligenceact.eu/developments/ (last visited Oct. 31, 2023); MEPs Ready to Negotiate First-Ever Rules for Safe and Transparent AI, Eur. Parl. (June 14, 2023, 12:52 PM), https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai.

[20] The Members of the European Parliament, Eur. Parl., https://www.europarl.europa.eu/about-parliament/en/organisation-and-rules/organisation/members (last visited Oct. 31, 2023).

[21] The Political Groups of the European Parliament, Eur. Parl., https://www.europarl.europa.eu/about-parliament/en/organisation-and-rules/organisation/political-groups (last visited Oct. 31, 2023).

[22] Id.

[23] Eur. Parl., supra note 5.

[24] Id.

[25] The A.I. Act, supra note 19; A European Approach to Artificial Intelligence, Eur. Comm’n, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (last visited Oct. 31, 2023); New Liability Rules on Products and AI to Protect Consumers and Foster Innovation, Eur. Comm’n (Sept. 28, 2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807.

[26] Eur. Comm’n, supra note 25.

[27] Id.

[28] The A.I. Act, supra note 19; European Parliament Adopts Its Negotiating Position on the EU AI Act, Gibson Dunn (June 21, 2023), https://www.gibsondunn.com/european-parliament-adopts-its-negotiating-position-on-the-eu-ai-act/.

[29] Id.

[30] Richard Smirke, Big-Tech Lobbyists Are Trying to Weaken Copyright Protections in the EU’s AI Act, Billboard (July 19, 2023), https://www.billboard.com/pro/eu-ai-act-big-tech-lobbyists-weaken-copyright-protections/.

[31] Gibson Dunn, supra note 28.

[32] Gibson Dunn, supra note 28; Victor Li, What Could AI Regulation in the US Look Like?, Am. Bar Ass’n (June 14, 2023), https://www.americanbar.org/groups/journal/podcast/what-could-ai-regulation-in-the-us-look-like/.

[33] John Edwards, Why the US Risks Falling Behind in AI Leadership, Info. Wk. (Dec. 14, 2022), https://www.informationweek.com/cyber-resilience/why-the-us-risks-falling-behind-in-ai-leadership.

[34] Ivey Dyson & Faiza Patel, The Perils and Promise of AI Regulation, Brennan Ctr. (July 26, 2023), https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation.

[35] Id.

[36] Id.

[37] U.S.-EU Joint Statement of the Trade and Technology Council, The White House (May 31, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/31/u-s-eu-joint-statement-of-the-trade-and-technology-council-2/.

[38] Martin et al., U.S. Warns E.U.’s Landmark AI Policy Will Only Benefit Big Tech, Bloomberg (Oct. 5, 2023), https://www.bloomberg.com/news/articles/2023-10-06/us-warns-eu-s-landmark-ai-policy-will-only-benefit-big-tech?embedded-checkout=true.

[39] Id.

[40] Id.

[41] How Does China’s Approach to AI Regulation Differ From The US And EU?, Forbes (July 18, 2023, 2:21 PM), https://www.forbes.com/sites/forbeseq/2023/07/18/how-does-chinas-approach-to-ai-regulation-differ-from-the-us-and-eu/?sh=15e0bcd1351c.

[42] FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, The White House (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[43] Id.

[44] Algorithmic Discrimination Protections, The White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights/algorithmic-discrimination-protections-2/ (last visited Nov. 18, 2023).

[45] The White House, supra note 42.

[46] Id.

[47] Id.

[48] Id.

[49] Craig Smith, Act One: Opposition Takes Center Stage Against EU AI Legislation, Forbes (Sept. 5, 2023, 8:00 AM), https://www.forbes.com/sites/craigsmith/2023/09/05/act-one-opposition-takes-center-stage-against-eu-ai-legislation/?sh=67e5644449b8.

[50] Id.

[51] Smirke, supra note 30.

[52] Smith, supra note 49.

[53] Id.

[54] Id.

[55] Id.

[56] Id.

[57] Id.

[58] Michael Bennett, The Future of AI: What to Expect in the Next 5 Years, TechTarget (May 25, 2023), https://www.techtarget.com/searchenterpriseai/tip/The-future-of-AI-What-to-expect-in-the-next-5-years.
