Artificial Intelligence

The materials provided here are for general informational purposes only and are not intended as legal advice on any subject matter. Do not act or refrain from acting on the basis of any of the content in this bulletin. The views expressed in the materials on this page are only the views of the authors in their individual capacity and not of the Innovation Clinic, the University of Chicago or any other student or faculty member. The appearance of these materials on this site does not constitute an endorsement by any such person. Neither the Innovation Clinic nor the University of Chicago makes any representation or warranty with respect to the accuracy, applicability, fitness or completeness of the content on this page.

What Is the Right Corporate Form for Artificial Intelligence Companies?

By Ish Farooqui

May 9, 2024

A number of the most promising startups developing foundational models in the artificial intelligence space have chosen not to form as traditional for-profit corporations. OpenAI was initially incorporated as a non-profit and later created a capped for-profit subsidiary for fundraising purposes.[1] Another unicorn, Anthropic, has chosen to incorporate as a public benefit corporation (PBC) with an independent trust that will control the board of directors.[2] xAI, Elon Musk’s latest venture, is a PBC. Inflection is too. All this raises the question of why some founders in the AI industry have opted to avoid the tried and tested corporate for-profit form, even though their companies do typical for-profit things like sell products, chase market share, and raise billions of dollars in venture capital.

If you asked the founders of OpenAI or Anthropic, they’d start with the potential downstream effects of this uniquely powerful technology on society, including (not to sound like an alarmist) the end of humanity as we know it. For-profit corporations only care about profits, the argument goes. But a nonprofit or a PBC has a public mission and can weigh the social benefits of AI against its social costs, leading to safer, more responsible development. That all seems quite reasonable on a first pass. But for-profits are more flexible than such founders give them credit for. And nonprofits and PBCs come with unique governance problems. In addition, the question of entity form might depend on our view of what the specific risks of AI are. How might each corporate form really fare in managing different AI risks?

It’s true that under Delaware law, corporations are profit-maximizing entities. Directors must rationally connect their decisions to the promotion of the value of the corporation for the benefit of its stockholders.[3] But this is far from a straitjacket, because a lot of things affect shareholder value, and Delaware courts will not second-guess a rational decision about what advances the interests of shareholders (this is the famed “business judgment rule”). As then-Vice Chancellor Strine put it, “we do not require boards to measure their success against the moment-to-moment impulses of the stock market.”[4] So in practice, corporations engage in all sorts of not obviously profit-maximizing activities like donating to charity or giving employees a raise. An AI for-profit could devote its resources to safety and research because that rationally promotes corporate value by increasing consumer confidence in the business, reducing the risk of product liability suits, and improving an AI system’s functionality and performance. So while it’s true that for-profit corporations have a duty to promote shareholder value, it does not mean that for-profits can’t adequately consider safety issues. In addition, one nice advantage of a profit-maximizing entity is that its goals are clear and measurable.

By comparison, nonprofits can pursue purposes that are so broad that they are effectively purposeless. OpenAI, a nonprofit, tells us its mission is to ensure that artificial general intelligence “benefits all of humanity.”[5] The company explains that it wants to “help the world build safe AI technology and ensure AI’s benefits are as widely and evenly distributed as possible.” It’s a pretty broad mission and naturally raises the question: is it meaningful? It’s not clear how the nonprofit board, or anyone, can figure out what is in the interest of all of humanity. How would the board weigh the loss of ten thousand jobs against the potential advancement of new lifesaving drugs? What about a million jobs? Or ten million? I don’t think there’s a measurable or principled way to figure out what is a benefit to humanity. A common adage in Silicon Valley is that you make what you measure.[6] So what are companies like OpenAI making? The truth is that OpenAI operates a lot like a typical for-profit tech company. It's run by a serial entrepreneur, has sold a hefty stake to Microsoft, is in a competitive fight with Google, pays some of its employees millions of dollars, and is expected to reap huge profits for its shareholders. As Matt Levine put it, “what’s weird about OpenAI is that it’s an $86 billion startup with nonprofit governance.”[7] And that should be troubling because of the state of nonprofit governance.

The problem with nonprofits, as a general matter, is that they tend to be governed much less effectively than their for-profit peers.[8] This is a structural issue as much as anything else. A nonprofit does not have shareholders. It’s not owned by anyone. This means the board of directors answers only to itself. An entrenched and self-perpetuating board is not what we consider good governance in the for-profit context. As expected, nonprofit directors are less engaged than their for-profit peers. As one legal commentator explained, nonprofit board members often “do not know what their job is” and “have no idea what they are to do individually and collectively.”[9] Nonprofit directors “are faulted for not knowing what is going on in their organizations and for not demonstrating much desire to find out. Attendance at board meetings is often spotty and participation perfunctory.”[10] This is particularly troubling in the AI context because the stakes are really high. Directors should be actively overseeing management and capable of making informed long-term decisions. The last thing we want is an AI company that operates like a for-profit with all the risks that entails but without any of the accountability mechanisms of a typical for-profit!

The situation is weirder for PBCs. On the one hand, legal commentators are less negative about PBC governance. On the other hand, PBCs have only been around for ten years, so there is less to say about them. One positive is that PBCs have shareholders who can rein in wayward directors. One negative is that PBC directors have an even more confusing job than nonprofit directors. Imagine the nebulousness of an open-ended mission welded to a multi-factor balancing test around profit-maximization; that is the basis on which PBCs are supposed to be run. Under Delaware law, PBC directors have a duty to balance “the pecuniary interests of the stockholders, the best interests of those materially affected by the corporation’s conduct, and the specific public benefit or public benefits identified in its certificate of incorporation.”[11] However, PBC directors only owe fiduciary duties to shareholders, not the public or those affected by the business, meaning the board may be likely to prioritize the pecuniary interest of shareholders over the public benefit in any event.[12] So maybe the PBC form will have only a marginal effect on corporate decision-making as compared to a for-profit entity. Consider Anthropic, whose stated mission is the “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”[13] President Daniela Amodei told Vox, “If the only thing that [potential investors] care about is return on investment, we just might not be the right company for them to invest in.”[14] That’s right, but also: Anthropic’s largest investors are Amazon and Google, who as for-profit corporations have a legal duty to only care about their return on investment. So is the PBC form just window-dressing? Will the balancing test lead to more arbitrary or self-interested decision-making by directors? Or will the PBC form offer useful discretion for corporations to act in the public interest? We’ll have to see what happens with Anthropic.

So far, we’ve discussed at a high level some differences between for-profits, nonprofits, and PBCs. But what about the specific risks of AI? Do they justify the tradeoffs in governance? I think it’s helpful to break down AI risk into three categories (I present them as distinct for the sake of analysis, but they are not mutually exclusive). I will discuss the risk that we create an uncontrollable superintelligence; the risk that an AI becomes misaligned with our goals; and the risk that AI is used by bad actors to do bad things. If you are curious about the world of AI safety, here’s a link for more.

The first risk is that we build an artificial superintelligence that so far exceeds the powers of the human mind that it poses an existential risk to humanity. Some believe this may happen quite suddenly without much forewarning; others predict a slow take-off over years, even decades. But the key concern is that we only get one try at building a safe superintelligence. If we miscalculate the design such that the super AI is not friendly or controllable, then it will kill us all. The underlying logic is that intelligence is inherently dangerous; it’s the reason humans conquered the planet and not our stronger primate cousins. I know this sounds like the plot to The Terminator, but it’s a genuine concern that AI companies are thinking about. So which corporate form can best handle the risk of superintelligence? In a fast take-off world where this all happens quite suddenly and unexpectedly, it doesn’t really matter what corporate form the AI company takes. Once we cross the Rubicon, we’re doomed. In a slow take-off world, there are obvious advantages to the nonprofit form. A nonprofit could commit to not building a technology beyond a set amount of computational power determined to be safe—and end of story, no one can force the board to do otherwise. For PBCs and for-profits, this is much trickier because of the presence of shareholders. Shareholders may push the board to keep going because they either discount the risks or believe it won’t happen in their lifetime. Maybe that’s an overly pessimistic view of shareholders, since destroying humanity would be bad for corporate value and shareholders (many of whom are people). Still, if you accept that a superintelligent AI is inherently uncontrollable and dangerous, then you should probably be worried about for-profits, PBCs, and frankly anyone working on AI.

A different risk relates to the alignment of AI to our values. Here, we turn over control of our resources to a powerful artificial intelligence system (it need not be superintelligent) and instruct it to accomplish a task. In performing that task, the AI takes a series of intermediate steps on the way to the ultimate goal that are socially harmful. The AI is not necessarily going to take over the world, but it could go haywire and cause accidents. As an example, imagine a farmer who asks an AI to help him maximize his crop yield. The farmer expects the AI will figure out the right seeds to buy, when to rotate his crops, and the optimal amount of watering. And maybe the AI does all that, but it also goes further. The AI knocks down the farmer’s house to make room for more crops, manipulates his neighbors into selling their fields, and bribes public officials to ignore environmental regulations. The AI has accomplished the goal of maximizing yield but wrought much harm along the way. When it comes to the alignment risk, there’s a case that for-profits are actually in the best position to solve the problem. The alignment problem is really a “build a better product” problem, which we know corporations are quite good at. Building an aligned AI system means building a product consumers will prefer because it does what they actually intend. The profit-seeking motive will encourage startups to solve alignment problems. And given the expense of building AI systems and the cost of hiring talent to test and tinker, a corporation or PBC will be better able to attract the capital to solve alignment than a nonprofit. In addition, a corporation will appropriately consider the risks of misaligned AI because of the threat of product liability suits. The farmer in our example can turn around and sue the AI company; and the farmer’s neighbor is likely to sue too. So the cost-benefit analysis made by the corporation on the proper amount of alignment is likely to be close to what’s socially optimal.

The final category of risk is AI misuse and externalities. These are the risks that a customer will use the AI to do bad things like spread misinformation or make a weapon. PBCs have an advantage over for-profits in managing this risk because the board has the discretion to balance pecuniary interests against the negative externalities of a misused AI. A for-profit can’t really consider external costs that do not affect the value of the business. So insofar as the for-profit is not liable for what customers do with the product, it would be free to ignore those costs or not fully internalize them. Courts and lawmakers can play a role in bridging this gap between PBCs and for-profits by imposing liability on AI companies, forcing them to consider the social costs of misuse. Here’s a slightly different risk: an AI company has created a product that will be extremely profitable but will also wipe out ten million jobs. Should it market the product? A for-profit basically has a duty to do so because of the expected profits. A PBC could choose not to. But PBCs also have to consider their shareholders who may be upset enough with that decision to vote the directors out. In the end, we probably feel more comfortable with a PBC than a for-profit when it comes to misuse and negative externalities.

Given the risks of AI, it’s reasonable for AI founders to bake into their corporate charters a public mission and sense of responsibility in developing the technology. But that can create a governance problem if the board is given so broad a mandate that it can do whatever it likes. The problem is magnified by the fact that these AI companies operate for all intents and purposes like typical for-profit companies, but without some of the mechanisms that discipline for-profit boards and management. Still, we may find that the risks of AI justify the governance tradeoff. One thing a PBC or nonprofit can do that a for-profit can’t is forthrightly consider the negative externalities of the technology on society. Given the predictable disruptions to come, maybe there is hope after all in the PBC.

[1] For information on the capped-profit subsidiary, see

[2] The unique structure of Anthropic’s trust is explained here:

[3] eBay Domestic Holdings, Inc. v. Newmark, 16 A.3d 1, 34 (Del. Ch. 2010).

[4] Leo E. Strine, Jr., “The Delaware Way: How We Do Corporate Law And Some of the New Challenges We (And Europe) Face,” 30 Delaware Journal of Corporate Law 673, 681 (2005).

[5] For more on the mission, see

[6] See

[7] Matt Levine, “Open AI Is a Strange Nonprofit,” Bloomberg,

[8] George W. Dent, Jr., “Corporate Governance Without Shareholders: A Cautionary Lesson From Non-Profit Organizations,” 39 Del. J. Corp. L. 93 (2014).

[9] Id. at 99.

[10] Id.

[11] Delaware General Corporation Law § 365(a).

[12] Delaware General Corporation Law § 365(b).

[13] For Anthropic’s discussion of its mission:

[14] Dylan Matthews, “The $1 billion gamble to ensure AI doesn’t destroy humanity,” Vox,

From Brussels to Washington: How the EU AI Act Can Inform US Regulatory Strategy

By Audrey Lee, Sarah Pak

May 19, 2024

(1) Introduction

In an era of rapidly evolving artificial intelligence (AI) technologies, nations face the critical task of capturing the transformative potential of AI while mitigating its inherent risks. The impact of failing to answer this challenge recently came into sharp focus at a biological arms control conference where researchers from Collaborations Pharmaceuticals, a drug developer for rare and neglected diseases, presented a chilling discovery: their AI-powered molecule discovery software could be repurposed with alarming ease to produce chemical weapons.[1] In less than six hours, their AI system, MegaSyn, initially designed to aid in drug discovery, was manipulated to generate 40,000 toxic molecules, including known chemical weapons, new toxic compounds, and VX, the most lethal nerve agent ever developed. Their findings expose the precarious nature of AI technology, in which systems designed in good faith to generate significant societal benefits can be readily repurposed to cause significant harm.

Globally, comprehensive regulatory responses to these risks are uneven and sparse, with one notable exception: the European Union (EU). The EU, a regulatory environment known for its ardent protection of consumer rights, has spearheaded comprehensive AI legislation through the enactment of the EU AI Act, passed on March 13, 2024.[2] The EU AI Act aims to mitigate risks and promote responsible uses of AI by requiring greater transparency, accountability, and human oversight, both during the development and use of AI systems. As the EU’s regulatory framework goes into effect and AI technologies continue to gain rapid adoption worldwide, the need for a robust regulatory regime in the United States has never been greater. As the product of a first mover in this space, the EU AI Act provides a valuable blueprint for US policymakers to learn from in developing their own response to the risks associated with AI. This paper argues that by understanding the structure, enforcement mechanisms, and foundational values of the EU AI Act, the United States can develop an informed and effective strategy tailored to its unique legal and social landscape. In addressing its own immediate regulatory gaps, the US can set a parallel global precedent for future technological governance that either mirrors or distinguishes itself from the regime created in the EU.

(2) Dissecting the EU's Regulatory Framework for AI

The EU AI Act is a pioneer in the design of a legal framework to govern the development, deployment, and use of AI within a single economic market shared by multiple nation states. The primary purpose of the EU AI Act is to ensure the technology’s safe and ethical integration into society by prioritizing the protection of European citizens. The EU AI Act does so in the face of clear commercial incentives to the contrary that encourage AI companies to blaze forward in the pursuit of profit. Through a flexible scheme that adapts to the contours of each company’s AI system, the new regime introduced by the EU AI Act works to address the negative externalities that may arise from this powerful technology operating in an unregulated market.

Before diving into its specific components, it is important to recognize upfront that the EU AI Act is an EU regulation rather than a directive, which has significant implications for its implementation across member states. As a regulation, the EU AI Act is directly applicable and enforceable in all EU member states without the need for national transposition. This ensures uniformity in the legal framework governing AI across the EU, providing consistent standards and obligations for AI developers and users. Unlike directives, which allow member states flexibility in how they achieve the objectives set out, regulations impose immediate and binding legal obligations. Consequently, member states cannot opt out of or delay the implementation of the EU AI Act, leading to fast and cohesive adoption of AI regulations throughout the EU. This approach minimizes legal fragmentation and ensures a level playing field for AI innovation and compliance within the internal market. Nonetheless, an interesting potential parallel may develop between the EU’s approach to regulating AI and the EU’s regulations on data privacy through the General Data Protection Regulation (GDPR), which, despite its stringent provisions, has seen somewhat selective enforcement across member states, with more lenient tolerance of privacy violations than initially anticipated. Economist Tyler Cowen suggests that this selective enforcement could signal a similar fate for the EU AI Act, as its perceived stringency might be tempered by practical enforcement realities, much like the GDPR.[3] This perspective suggests that while the regulation sets a high standard, its real-world implementation may involve a degree of flexibility, balancing regulatory goals with practical enforcement considerations.

At its heart, the EU AI Act employs a risk-based approach to AI regulation that categorizes AI systems based on the level of threat they might pose across four specific risk levels: minimal or no risk, limited risk, high risk, and unacceptable risk. The “minimal or no risk” category encompasses AI systems with use cases that have minimal potential to cause harm or infringe upon fundamental rights, such as AI used to enhance video game graphics, or spam filters. These minimal-risk AI systems are subject to no specific requirements under the EU AI Act, given that they typically perform narrow tasks with limited decision-making capabilities and deal with non-sensitive data.

The first requirements laid out in the EU AI Act apply to “limited risk” AI systems, such as chatbots or certain emotion recognition tools, that have the potential to manipulate or deceive users but are not considered high-risk enough to warrant the most stringent regulatory requirements. The primary mandate for companies operating limited-risk systems is that they must clearly inform users that they are engaging with an AI-powered technology. Developers of such systems must also give users the ability to opt to interact with a human representative instead of the AI system. This transparency requirement aims to empower users and prevent any unintended deception or confusion.

The EU AI Act’s “high-risk” category represents the most heavily regulated tier of AI systems within the legislative framework and likely encompasses many AI applications already in use today. Unlike other categories, the approach to high-risk AI technologies moves from a focus on functionality and purpose to a focus on industry and sensitivity. This category is defined to apply to AI that operates in critical domains with the potential for high impact on the fundamental rights of Europeans, such as transportation, education, employment, law enforcement, and access to essential services. Examples of AI systems that fall within this category are biometric identification systems, AI-powered recruitment and employee management tools, and healthcare diagnostic algorithms. Providers of these high-risk AI systems face a robust set of compliance obligations before they can be placed on the EU market or put into service in the EU. First and foremost, they must conduct thorough fundamental rights impact assessments to ensure their AI systems respect European Union values and uphold the rights of citizens in compliance with other key acts passed by the European Parliament. Additionally, these providers are required to implement stringent data governance and record-keeping practices, maintaining detailed technical documentation that can be scrutinized by regulatory authorities. The technical documentation component requires AI companies to disclose a general description of their AI system, including its purpose, developer details, version history, interaction capabilities with other hardware or software, its development process, design specifications, system architecture, data requirements, human oversight measures, performance limitations, accuracy levels, and any lifecycle modifications or changes to the standards of the model. An EU declaration of conformity must be included in this documentation to confirm compliance. High-risk AI systems must also undergo third-party conformity assessments to verify their compliance with the EU AI Act’s requirements. These assessments are conducted by notified bodies, which are independent organizations designated by EU member states. Notified bodies can be either private companies or public entities, but they must be accredited and meet specific criteria set by the EU to ensure impartiality and independence. Furthermore, AI companies in this tier are mandated to register their systems in an EU-wide public database, facilitating oversight and monitoring by relevant governing bodies, such as the European Data Protection Board (EDPB), national Data Protection Authorities (DPAs), and the European Commission. Crucially, any incidents related to a high-risk AI system that could impact health, safety, or fundamental rights must be promptly reported to the relevant authorities. This high-risk category is widely regarded as the most controversial and burdensome component of the EU AI Act because it imposes significant compliance obligations on organizations developing and deploying these AI systems in use cases where the value of such systems is arguably at its greatest.[4] However, the EU views these stringent requirements as necessary to mitigate the potential harms and risks associated with high-impact AI applications, prioritizing the protection of its citizens over technological advancement.

At the most extreme end of the spectrum are the “unacceptable risk” AI systems, which are outright banned under the EU AI Act. These systems are categorically prohibited due to their significant potential for harm; the ban safeguards against technologies that could exploit personal vulnerabilities or manipulate behavior beyond the ethical boundaries set by the EU. Applications banned under this category include AI that could subvert human autonomy, such as systems designed for deep social scoring, which evaluate individuals based on various personal characteristics, or technologies capable of manipulating human behavior to circumvent free will. Another banned activity is the untargeted scraping of facial images from the internet or CCTV footage, which the EU views as posing serious privacy and civil liberties risks. The legislation specifically bans biometric categorization systems that make use of sensitive characteristics, such as political or religious beliefs, sexual orientation, and race, due to the discriminatory risks such systems pose. Emotion recognition technologies, particularly when used in the workplace or educational settings, are also banned, reflecting the EU’s commitment to protecting individuals from invasive surveillance technologies that could impact mental health and well-being. The use of biometric identification systems, in particular, was a contentious issue, with law enforcement agencies ultimately being granted limited exemptions under strict conditions, such as to respond to serious crimes like terrorism and kidnapping. Police may only use these systems in a manner that is strictly confined in both time and space, ensuring that their deployment does not infringe on broader civil liberties more than is absolutely necessary.
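For readers who think in code, the four-tier scheme walked through above can be summarized as a simple lookup table. The sketch below is illustrative only: the tier names follow the Act, but the obligation summaries are loose paraphrases of this section, not the Act's official language.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, as summarized in the text above."""
    MINIMAL = "minimal or no risk"       # e.g. spam filters, video game AI
    LIMITED = "limited risk"             # e.g. chatbots, some emotion recognition
    HIGH = "high risk"                   # e.g. biometric ID, hiring tools
    UNACCEPTABLE = "unacceptable risk"   # e.g. social scoring, behavior manipulation

# Paraphrased (non-official) obligation summaries keyed by tier:
OBLIGATIONS = {
    RiskTier.MINIMAL: "no specific requirements",
    RiskTier.LIMITED: "disclose AI interaction; offer a human alternative",
    RiskTier.HIGH: "impact assessments, documentation, conformity checks, registration",
    RiskTier.UNACCEPTABLE: "prohibited from the EU market",
}

for tier in RiskTier:
    print(f"{tier.value}: {OBLIGATIONS[tier]}")
```

The point of the mapping is structural: obligations attach to the tier, so classifying a system is the whole compliance question for most providers.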

Due to the unique challenges posed by general-purpose AI (GPAI) models, the EU AI Act also introduced separate requirements for these highly versatile and influential systems. The EU AI Act outlines that GPAI models must adhere to a range of supplementary obligations if they are deemed to have “high impact capabilities,” defined as models trained using a total computing power of more than 10^25 floating-point operations (FLOPs), the quantifiable point at which AI is considered to carry systemic risk because of its immense computing power.[5] These additional requirements include conducting model evaluations, assessing and mitigating systemic risks, performing adversarial testing, reporting all non-compliant incidents to the European Commission, and implementing enhanced cybersecurity measures. In contrast to the risk assessments mandated for high-risk AI systems, the systemic risk assessment for GPAI models under the EU AI Act involves a more comprehensive and nuanced effort that requires companies to consider the broader implications of deploying their highly versatile and powerful AI systems across various sectors and societal functions. One part of the assessment is a systemic impact analysis, which evaluates the potential widespread effects of GPAI models to anticipate and manage any disruptions or negative impacts they may cause. Another involves analyzing the AI system’s adversarial robustness, where rigorous adversarial testing is conducted to determine the model’s vulnerability to malicious attacks and appropriate measures are implemented to mitigate these risks. Another critical component is the examination of interoperability and integration risks. This process scrutinizes how the GPAI model interacts with other systems and technologies, identifying potential risks from these interactions and developing strategies to mitigate them. Enhanced cybersecurity measures are also a crucial part of the assessment. Collectively, these additional levels of scrutiny are designed to protect GPAI models from sophisticated threats that could exploit their capabilities.
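The 10^25 FLOP threshold described above is a bright-line rule, which makes it easy to express as a one-line check. The sketch below is a simplified illustration; the function name and the sample training-compute figures are hypothetical, and the Act also lets the Commission designate models as systemic on other grounds.

```python
# Simplified sketch of the EU AI Act's compute threshold for GPAI models:
# training compute above 10^25 FLOPs triggers the systemic-risk obligations.
SYSTEMIC_RISK_FLOPS = 1e25

def has_high_impact_capabilities(training_flops: float) -> bool:
    """Return True if a GPAI model is presumed to carry systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Illustrative (made-up) training-compute figures:
print(has_high_impact_capabilities(5e24))  # below threshold -> False
print(has_high_impact_capabilities(3e25))  # above threshold -> True
```

A bright-line compute trigger trades precision for administrability: regulators need only one disclosed number from a lab to know which obligations apply.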

Importantly, the EU AI Act's reach extends beyond the EU's borders, as it applies to any entity that offers AI services to the European market, regardless of the entity’s geographic location. This means that US-based companies, for example, may be subject to the EU AI Act's potential penalties for non-compliance, which can range from €7.5 million or 1.5% of global revenue, whichever is greater, up to €35 million or 7% of global revenue, depending on the severity of the infringement. The hope is that these substantial penalties will operate as effective enforcement tools that dissuade companies from non-compliance. By establishing these harmonized rules and effective enforcement measures across the EU, the EU AI Act aims to foster a single market of standards for AI in Europe, ensuring the responsible development and use of these technologies within its member state borders.
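The "whichever is greater" structure of the fines just described means the effective cap scales with company size. A minimal sketch, using only the two endpoints the text mentions (the Act also contains an intermediate tier not modeled here), with the revenue figure purely illustrative:

```python
def max_fine(global_revenue_eur: float, severity: str) -> float:
    """Upper bound of an EU AI Act fine under the 'whichever is greater' rule.

    Tiers follow the two endpoints described in the text; the intermediate
    tier in the Act is omitted from this sketch.
    """
    tiers = {
        "least_severe": (7_500_000, 0.015),   # EUR 7.5M or 1.5% of global revenue
        "most_severe": (35_000_000, 0.07),    # EUR 35M or 7% of global revenue
    }
    fixed_floor, revenue_pct = tiers[severity]
    return max(fixed_floor, revenue_pct * global_revenue_eur)

# A hypothetical company with EUR 2 billion in global revenue:
print(max_fine(2_000_000_000, "most_severe"))  # 7% of 2B = EUR 140M > EUR 35M floor
```

For large firms the percentage prong dominates, which is why the penalties bite hardest for exactly the global AI providers the Act's extraterritorial reach is aimed at.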

(3) Biden’s Executive Order: The US’s Policy Considerations

On October 30, 2023, President Biden released an Executive Order (EO) on AI to advance the safe and secure development of AI.[6] Unlike the EU AI Act, the EO does not enact specific regulations or describe discrete categories of AI. Instead, the EO describes eight overarching policy areas that are relevant to, or that require analysis to inform, future AI regulation: Safety and Security; Innovation and Competition; Worker Support; Consideration of AI Bias and Civil Rights; Consumer Protection; Privacy; Federal Use of AI; and International Leadership.[7] The EO then directs relevant federal agencies to create guidelines and strategies rather than outlining regulations itself.

For each category, the EO discusses general policy considerations and instructs agencies on the next steps they should take. Since the President has authority over federal agencies, the EO requires different government agencies to provide certain deliverables. Most of the deliverables are vague initial steps towards an eventual regulatory framework; for instance, the EO instructs the Department of Homeland Security to evaluate the potential for AI to be used to develop, produce, or counter chemical, biological, radiological, and nuclear (CBRN) threats. Many of the agencies are instructed broadly to start evaluations or research on AI.

The clearest directive regarding regulation appears within the Safety and Security category. The EO tasked the National Institute of Standards and Technology (NIST) with releasing guidelines and best practices for AI companies by April 2024. On April 29, 2024, NIST released a draft companion to its AI Risk Management Framework (AI RMF) specifically tailored to the risks associated with generative AI. This draft, known as the AI RMF Generative AI Profile, serves as a tool for organizations to recognize and mitigate the unique risks posed by generative AI technologies, aligning their actions with their specific goals and priorities. Crafted over the preceding year, the guidance draws on insights from NIST's generative AI public working group, which comprises over 2,500 members. It takes a comprehensive approach, presenting a list of 12 identified risks and proposing more than 400 actionable measures for developers to manage them.[8] Even this draft, however, is voluntary guidance, meant to help organizations incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems, rather than binding regulation.[9]

The EO represents a preliminary, principles-based approach to the advancement of AI, advocating for responsible development through comprehensive guidelines that underscore safety, innovation, and ethical considerations.[10] Rather than prescribing specific regulations, the EO outlines overarching priorities for AI deployment: bolstering AI safety and security measures, protecting a culture of innovation to drive technological progress, and ensuring robust protections for individuals' privacy rights. By taking a broad-strokes approach and waiting for information-gathering exercises to conclude before definitively moving in one direction or another, the EO allows for adaptability in addressing the multifaceted challenges and opportunities presented by AI. This flexibility is reflected in the regulatory landscape it fosters, which encourages voluntary compliance and the emergence of industry-led standards. Such a dynamic framework accommodates the rapid pace of technological change and promotes collaboration among government, industry, and other stakeholders in shaping the responsible development and deployment of AI technologies. Through this approach, the EO seeks to strike a delicate balance between regulatory oversight and the protection of innovation, positioning the United States as a leader in the global AI landscape while safeguarding against potential risks and ensuring the ethical use of AI technologies.

In short, the EO seems to leave room for innovation as much as it raises safety considerations. The development of AI is undeniably an international race, with countries around the world investing heavily in research, development, and deployment of AI technologies. As a global leader in technology and innovation, the United States is in a prime position to maintain and strengthen its competitive edge in this race. Focusing on developing AI not only enhances America's economic competitiveness but also bolsters its national security and strategic interests.

As it stands, however, the EO fails to provide clear regulations or enforcement mechanisms for protecting against malicious uses of AI. The US may need to find ways to incorporate more transparent protections for its citizens while promoting innovation.

(4) The Achievements of the EU AI Act

At the outset, there is something to be said about the EU's position as a trailblazer in the regulation of AI. The first-mover advantage in technological regulation is significant: by requiring compliance from any company wishing to operate in the EU, the EU AI Act has undeniable extraterritorial effect in this unsettled and interdisciplinary area, forcing US, Chinese, and other companies to conform to values-based EU standards before their products may enter the European market. For better or for worse, the EU AI Act provides direction for the US and other countries to consider if and when they pass their own regulations.

Perhaps most importantly, the EU AI Act offers comprehensive protection for its citizens. It grants the European citizenry more rights and protections with respect to AI than are enjoyed anywhere else in the world, including the right for consumers to file complaints where they believe the Act's protections have been violated.[11] By providing mechanisms for human oversight and the right to challenge AI decisions, the EU AI Act empowers individuals to protect their rights and ensures AI systems do not override human autonomy, safeguards that will likely be welcomed by populations alarmed by the emergence and integration of AI into daily life.

Furthermore, the EU AI Act’s approach of categorizing AI systems by risk level, viewed in a favorable light, allows for tailored regulatory measures proportional to the potential threats posed by various AI applications.[12] Tailored responses to AI challenges and developments will likely be the only way to allow for innovation while safeguarding against harmful uses. Additionally, the EU AI Act’s prohibition of practices deemed to pose significant risks to personal rights reflects a robust commitment by the EU to uphold ethical standards in AI development and to set clear legal boundaries against potentially oppressive technologies.

The EU AI Act’s focus on transparency and accountability, through its mandates for documentation and data governance, reinforces the EU’s priority of ensuring AI systems are understandable and scrutinizable by users.[13] These requirements hold AI providers accountable for their systems’ impacts, building public trust in AI technologies, which may be particularly important given the doomsday discourse that has emerged around AI. Similarly, the EU AI Act encourages active engagement with stakeholders, including industry leaders, academic experts, and civil society.[14] Particularly in regulating an interdisciplinary tool such as AI, such engagement is crucial to capturing the broad range of insight and experience needed to inform responses to practical challenges and opportunities for improvement in real time.

Finally, the EU AI Act strives to be both future-proof and adaptable. By establishing the European Artificial Intelligence Board to implement and enforce its substance, and by including adaptive measures for emerging technologies, the EU AI Act aims to evolve alongside technological advancements.[15] The Act includes provisions for regular reviews and updates to its policies and classifications. This iterative process helps ensure that the regulations remain relevant in light of new AI developments, which the Act explicitly addresses, particularly with respect to GPAI.

Although the EU approach to regulation has been criticized as stringent and overbroad, it does not necessarily quash innovative and revolutionary uses of AI. Mistral, a French company selling AI-capable products, shows how public entities such as governments can use AI to improve public administration and government efficacy. Mistral is used to monitor environmental pollutants and manage data related to air quality.[16] This AI-driven system helps enforce environmental regulations by predicting pollution levels, identifying potential non-compliance scenarios, and optimizing inspections and enforcement actions. Where the regulatory scheme promotes monitoring, compliance, transparency, and data-driven decision making, systems similar to Mistral may be developed under the EU AI Act to enhance monitoring and compliance across various sectors, including but not limited to air quality and industrial emissions, particularly when backed by AI analysis of vast datasets with accessible insights and enforcement actions.

(5) Critiques of the EU AI Act

Critics say that the EU AI Act overregulates generative AI, currently the centerpiece of the international AI race. Perhaps the EU has preemptively taken itself out of that race: it is not one of the major competitors in terms of either technology in development or market size, and GDPR, when enforced to the letter, already effectively prohibits generative AI from operating successfully in the EU.

Another popular criticism of the EU AI Act is that the “high risk” classification is over-inclusive and can include applications that are not actually dangerous. Therefore, the EU AI Act may in practice restrict AI innovation and certain applications unnecessarily. “High risk” AI includes, for example, AI utilized in credit scoring, human resources, medical devices and infrastructure, but there may be many low-risk ways of deploying AI in these use cases to great effect, particularly when the AI is paired with appropriate human supervision rather than allowed to run unchecked.

Furthermore, the Act’s focus on reducing discrimination creates room for overregulation. The EU AI Act aims to reduce the risk of AI perpetuating biases through a vague regulatory standard that may lead to overly broad and restrictive interpretations. The focus on discrimination is also arguably misplaced: discrimination, whether facilitated by machine learning or by personal human bias, yields similar outcomes, and existing anti-discrimination laws may suffice to regulate and reduce bias in either instance.

The EU AI Act additionally places a significant regulatory burden on small companies and startups, the entities that drive much economic activity. The EU AI Act did aim to create a more flexible, separate regulatory regime for GPAIs. Nonetheless, compliance with the GPAI standard itself, and with the additional “high risk” obligations that attach when GPAI is deployed in the relevant industries, presents significant challenges for developers of even well-aligned GPAIs.[17] These compliance costs will weigh most heavily on startups that fail to garner billion-dollar investments from technology conglomerates, further suffocating the market by raising barriers to entry for disruptive newcomers. Even well-capitalized companies may find it necessary to divert resources toward compliance rather than research and development. As a result, the EU AI Act risks stifling creativity and hindering the emergence of transformative AI technologies that could benefit society. French President Emmanuel Macron, for instance, voiced his concerns about the EU AI Act as follows: “When I look at France, it is probably the first country in terms of artificial intelligence in continental Europe. We are neck and neck with the British. They will not have this regulation on foundational models. But above all, we are all very far behind the Chinese and the Americans.”[18] He was referencing the need to protect the French AI startup Mistral. The development of AI is an international race, and the strict regulations will make compliance with the EU AI Act difficult and highly costly for startups. His concerns echo loudest in the minds of smaller companies with minimal initial funding that may now be deterred from launching in Europe.

Such strict restrictions on AI at the developer level may also be unnecessary or inadequate, because some measure of AI safety, including generative AI’s safety, depends in part on the user. Generative AI, for example, is highly sensitive to how users prompt it, making it difficult for systems to preemptively comply with the EU AI Act’s regulations as they stand. The requirement for premarket approval and the stringent compliance measures prescribed for high-risk AI systems create a burdensome regulatory environment, and may still fail to ensure compliance after users cleverly prompt around safeguards. The anecdote above regarding toxic molecules is one such example.

The EU AI Act does well in outlining clear regulations and creating categories, and the US could draw inspiration from that clarity. At the same time, the US may learn from the EU AI Act’s shortcomings to be more cautious and to avoid restricting the growth of AI too harshly.

(6) Recommended Essential Elements for an Effective US AI Regulatory Regime

In the legal sandbox of AI regulation, divergent democratic, economic, and social values mean that tradeoffs must be made. Using the EU AI Act as guidance or as a cautionary tale, US regulatory authorities will have to judge how well or poorly the EU makes those tradeoffs, particularly as informed by US-specific priorities. For better or worse, the EU AI Act points AI legal development in a direction, and other countries now face a choice: follow in the EU’s legal footsteps, or diverge and force companies either to be selective about where they operate or to comply with different sets of rules for different markets.

On the one hand, harmonization of standards between international bodies, in the form of worldwide adoption of similar regulations, may promote responsible business practices. Conformity by the US with the standards now set by the EU may prevent forum shopping, wherein companies seek the most favorable regime as motivated by financial interests, often at the expense of consumers, competition, the environment, and even society as a whole. The EU AI Act already has extraterritorial effects, compelling US and Chinese AI companies to conform to values-based EU standards before their AI products may enter the European market. Should the US use the EU method as a model, it may help prevent a fragmented regulatory environment and promote a level playing field, particularly in the Western world, given that other AI trailblazers like China pursue AI development very differently, based on strong state control and influence over their technology sector. Alignment of AI regulations offers clear benefits: it simplifies the global operational landscape for AI companies and ensures that consumer protections are uniform, which may, among other things, increase public trust and acceptance of AI technologies.[19] The US must keep in mind, however, that it is at the head of the race along with China and should avoid overregulating AI.

On the other hand, the EU AI Act might render other, less stringent regulations more or less relevant, depending on the behavior of major AI companies. Because companies are unlikely to design two versions of a product to conform with divergent regulations, they may be forced either to design to the highest common denominator or to forgo a given market. Said differently, companies may conform with the EU AI Act, perhaps limiting their development, or decline to operate in the EU at all.

The EU AI Act is not the first time the EU has trailblazed a regulatory approach that emphasizes precaution, perhaps at the expense of technological advancement; see our discussion of GDPR above.[20] A common critique of GDPR is its potential chilling effect on businesses, particularly those outside of Europe that find the compliance costs prohibitively high, a consequence of the so-called “Brussels Effect,” by which EU rules become de facto global standards. Large language models like ChatGPT may be effectively unable to operate in the EU, because those models lack the consent GDPR requires to draw citizens’ data from the internet. In this way, some argue that the rigorous requirements of GDPR stifle innovation by making it more difficult and expensive for companies to pursue data-driven strategies.[21] While the EU approach ensures high standards of consumer and data protection, it may therefore hinder businesses’ agility to innovate.

With the Brussels Effect in mind, the US should view the EU’s experience as a cautionary tale: overly stringent regulation can inhibit economic and innovative growth. By limiting its adoption of key components of the EU AI Act, the US can avoid forcing companies into behavior that prioritizes consumer protection at the expense of profitability and practical growth at a time when the industry is growing exponentially worldwide, while still enacting meaningful protections for US citizens that stop short of EU-level strictures. Current US policy appears increasingly friendly to the domestic AI industry, placing an emphasis on best practices and relying on preexisting agencies to craft their own rules in their subject-matter domains. This nuanced approach effectively localizes regulation to each sector of the economy. President Biden’s EO outlines a framework that is likewise concerned with safety, security, and ethical guidelines, but does so with an emphasis on maintaining and enhancing US leadership in AI innovation. The US has an opportunity to position itself as a leader in AI by crafting policies that not only promote safety and ethics but also cultivate a competitive market.

The US could adopt a balanced approach that safeguards against the most severe AI risks without stifling innovation. It may be more logical for the specific regulatory challenges of AI to fall to existing US governmental agencies, such as the FDA or FTC, depending on the aspect of AI being regulated. Rather than invest US resources in developing a comprehensive regulatory regime for AI, that expenditure may be better directed to existing federal agencies, allowing them to own AI regulation in areas where they are already experts and to apply that expertise to AI. Those agencies may be the actors best suited to determine what AI regulation is appropriate in their own spaces. They could then pursue narrower regulation targeting the most catastrophic risks, reacting in step with the development of AI and saving comprehensive regulation for a time when the fog has cleared.[22]

In short, through its expansive scope and broad regulation, the EU may simply be ensuring that AI research and operation happen elsewhere. To avoid hamstringing the AI industry in the US, as the overly broad EU AI Act may do in the EU, the US would be best served by a more tailored regulatory approach, one that works with preexisting oversight mechanisms to react flexibly across the wide array of sectors AI touches as the dust settles on the development of major AI technologies.

(7) Conclusion

There is no question that the EU AI Act establishes a model AI regulatory regime that the US can learn from as its policymakers grapple with the same core questions about safely developing AI. As detailed throughout this paper, the EU AI Act's structured approach to categorizing AI systems by risk level, and its stringent requirements for high-risk applications, exemplify a strong commitment to safeguarding public safety and ethical standards. Moreover, the EU AI Act's broad extraterritorial impact requires global AI developers who offer their products and services in Europe to comply with high standards of operation, which may effectively raise the bar for AI governance worldwide. While the EU has taken a decidedly precautionary stance that may limit some aspects of AI innovation, this approach provides a significant counterbalance to the often unchecked expansion seen in other regions. As the US considers its own regulatory framework, regulators would do well to consider a balanced approach that incorporates stringent protections where necessary while fostering an environment conducive to technological advancement and business development. If state and federal US government agencies like the SEC, DoD, and other regulators take the time to analyze the EU AI Act and extract lessons from this European precedent for the American legal and economic landscape, the US can ensure that it not only competes on the global stage but does so in a manner that is similarly safe, ethical, and aligned with democratic values. The insights provided by the EU's first-mover experience may prove instrumental in shaping a proactive and principled US strategy in the face of rapidly evolving AI technologies.
Until the industry matures to the point where Congress decides to end its silence on AI (definitively, rather than with the legislation introduced last week), this era-defining technology will continue to see its expansion checked by America’s federalist system, with preexisting agencies at the forefront of addressing how AI intersects with every dimension of American society, responding to the need for tailored regulations that safeguard our collective future.

[1] Urbina, Fabio, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins. “Dual Use of Artificial-Intelligence-Powered Drug Discovery.” Nature Machine Intelligence 4, no. 3 (2022): 189–91.

[2] European Commission. “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act),” COM/2021/206 final (2024).

[3] Cowen, Tyler. “AI’s Greatest Danger? The Humans Who Use It.”, January 25, 2024.

[4] Guzman, Hugo. “Innovation Concerns Grow over EU AI Regulation.”, 7 Sept. 2023,

[5] Kourinian, Arsen. “Data Security, Professional Perspective - Regulation of AI Foundation Models.” Bloomberg Law, Bloomberg, Feb. 2024,

[6] “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” The White House, Oct. 2023,

[7] “Highlights of the 2023 Executive Order on Artificial Intelligence for Congress.” Congressional Research Service, 3 Apr. 2024,

[8] “Artificial Intelligence Risk Management Framework.” National Institute of Standards and Technology, Apr. 2024,

[9] “AI Risk Management Framework.”National Institute of Standards and Technology, Apr. 2024,

[10] Petrosyan, Lusine. “A Tale of Two Policies: The EU AI Act and the U.S. AI Executive Order in Focus.” Trilligent, 26 Mar. 2024.

[11] European Commission. “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act),” Article XXII, COM/2021/206 final (2024).

[12] EU AI Act Art V-VI.

[13] EU AI Act, Article XIII (transparency and provision of information to users); Article X (data and data governance).

[14] There are Holes in Europe’s AI Act - and Researchers Can Help to Fill them, Nature, 2024, available at

[15] EU AI Act Article 56.

[16] Mistral, available at

[17] Hammond, Samuel. “Europe Blunders on AI.” City Journal, 22 Mar. 2024,

[18] Davies, Pascale. “Potentially Disastrous For Innovation: Tech Sector Reacts to the EU AI Act Saying It Goes Too Far.” Euronews, Dec. 2023,

[19] Mauritz Kop, EU Artificial Intelligence Act: The European Approach to AI (2021), available at

[20] The EU’s General Data Protection Regulation (GDPR), Bloomberg Law, 2024, available at

[21] Suzan Slijpen, Mauritz Kop & I. Glenn Cohen, EU and US Regulatory Challenges Facing AI Healthcare Innovator Firms, Petrie-Flom Center Blog, Harvard Law School, 2024, available at

[22] Cory Coglianese, How to Regulate Artificial Intelligence, The Regulatory Review (2024) available at