
Biden Executive Order seeks to govern the “promise and peril” of AI


White & Case Tech Newsflash

On October 30, 2023, United States President Joseph Biden signed Executive Order 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."1 The Order is the culmination of ongoing efforts by the Biden Administration to articulate its policies and priorities on AI.2 Sweeping in scope and addressing agencies across industries and sectors, the Order is premised on the understanding that "[h]arnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks."3

While the Order applies primarily and most immediately to federal agencies, it includes an important provision for foundation model developers and more generally illustrates the Biden Administration's vision for how it intends to pursue AI development and regulation while federal legislation remains forthcoming.

New standards for AI safety and security

Section 4 includes some of the Order's most novel and notable requirements, setting out detailed directives for the development of new standards, tools, testing protocols, and best practices for AI safety and security.

  • Mandating the development of federal standards: Section 4.1 provides that, within 270 days of the date of the Order (i.e., July 26, 2024), the National Institute of Standards and Technology ("NIST") shall establish guidelines and best practices, including setting standards for "red-team testing," defined in Section 3 to mean structured testing efforts, often through adversarial methods, to identify flaws and vulnerabilities associated with the misuse of the AI system.
  • Requiring developers of the most powerful AI systems to share safety test results and other critical information with the U.S. government: Section 4.2 outlines reporting requirements for developers of powerful AI models and for companies with large-scale computing clusters. The Order directs the Secretary of Commerce, within 90 days of the date of the Order (i.e., January 28, 2024), to require "companies developing or demonstrating an intent to develop potential dual-use foundation models" (defined in Sec. 3) to provide detailed information about their activities and models to the federal government on an ongoing basis. This includes the results of any developed dual-use foundation model's performance in relevant AI red-team testing based on the guidance developed by NIST.4 By applying to "companies developing or demonstrating an intent to develop" such models, the Order seems to contemplate that information subject to the reporting requirements must be shared with the federal government before the relevant AI systems are made available to the public. Within the same 90 days, the Secretary of Commerce is also directed to require companies to report their acquisition, development, or possession of large-scale computing clusters, including the existence and location of such clusters and the total amount of computing power available in each.5 Section 4.2(b) sets forth interim criteria identifying the minimum thresholds for foundation models and computing clusters that would be subject to the reporting requirements.6 (An illustrative sketch of these interim criteria follows this list.) While Section 4.2 explicitly invokes authority under the Defense Production Act, a law traditionally used during times of war or national emergencies such as the COVID-19 pandemic, it does not cite a specific provision of that statute.
  • Developing methods to detect and denote AI-generated content: Section 4.5 articulates requirements for reducing the risks posed by "synthetic" – i.e., AI-generated – content. The Order requires the Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content. The fact sheet on the Executive Order released by the White House specifies, "Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world."7
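
For readers who want a concrete sense of how the interim technical criteria in Section 4.2(b) operate, the sketch below translates the thresholds quoted in note 6 into simple numerical checks. It is an illustrative, non-authoritative reading only: the data structures and function names are hypothetical, and the text of the Order and any implementing guidance from the Department of Commerce control how the thresholds actually apply.

    # Illustrative sketch only: a non-authoritative reading of the interim thresholds
    # quoted in note 6 (Sec. 4.2(b) of the Order). All names below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class TrainingRun:
        total_operations: float                # integer or floating-point operations used in training
        primarily_biological_sequences: bool   # trained primarily on biological sequence data?

    @dataclass
    class ComputingCluster:
        co_located_single_datacenter: bool     # machines physically co-located in a single datacenter
        networking_gbit_per_s: float           # data center networking between machines
        peak_operations_per_second: float      # theoretical maximum computing capacity for training AI

    def model_reporting_threshold_met(run: TrainingRun) -> bool:
        """Sec. 4.2(b)(i): more than 1e26 operations, or 1e23 if trained primarily on biological sequence data."""
        threshold = 1e23 if run.primarily_biological_sequences else 1e26
        return run.total_operations > threshold

    def cluster_reporting_threshold_met(cluster: ComputingCluster) -> bool:
        """Sec. 4.2(b)(ii): co-located machines, over 100 Gbit/s networking, 1e20 operations per second capacity."""
        return (
            cluster.co_located_single_datacenter
            and cluster.networking_gbit_per_s > 100
            and cluster.peak_operations_per_second >= 1e20
        )

Because the interim criteria are framed as bright-line numerical tests, they translate directly into comparisons of this kind, although the Secretary of Commerce may update the definitions and thresholds over time.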

Mitigating the risks posed by AI while encouraging innovation across sectors

The organizing principle of the Order is the Biden Administration's desire to balance the unique risks of AI against its novel benefits. While privacy is a recurring theme throughout the Order, Section 9 is dedicated to the topic and includes specific directives to strengthen privacy-protecting technologies. For example, the Director of the Office of Management and Budget is directed to evaluate commercially available information ("CAI") procured by agencies, including CAI procured from data brokers and CAI procured and processed indirectly through vendors, with a particular emphasis on CAI that contains personally identifiable information.8 In its fact sheet on the Executive Order, the White House notes: "To better protect Americans' privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids. . . ."9

Dedicated to civil rights, Section 7 includes detailed directives to government agencies to address and prevent unlawful discrimination and other harms that may be exacerbated by AI in the criminal justice system, the administration of government benefits and programs, and other areas such as hiring and housing. Section 8 lays out additional protections for consumers, patients, passengers, and students.

The Order also devotes lengthy sections to the efforts the federal government must undertake to position the United States as a global leader in AI, including calls to catalyze AI research across the United States (see Sec. 5.2) and to encourage the FTC to exercise its authorities to help small businesses commercialize AI breakthroughs (see Sec. 5.3). While the Order includes directives to expand efforts to recruit "AI talent," including highly skilled immigrants, by updating and streamlining visa criteria and processing (see Sec. 5.1), it also calls out the need to mitigate the harms of AI for workers (see Sec. 6). Section 11 directs the Secretary of State to expand engagement with international allies to advance global technical standards for AI development, among other initiatives.

Key Takeaways

  • Comprehensive: Referencing topics including national security, housing, healthcare, education, employment, criminal justice, and consumer protection, the Order reflects the Biden Administration's broad and comprehensive vision for AI regulation across industries and sectors.
  • Pre-emptive rather than reactive: Congress has struggled to pass comprehensive federal privacy legislation, and the advancement of AI technology has only strengthened calls for it. Various provisions of the Order require that its directives be implemented within timeframes ranging from 90 days to one year, making clear the Administration's view that AI governance must be treated with urgency. While federal legislation remains elusive, federal agencies implementing the Order may begin shaping AI regulation in the meantime.
  • Clear calls to action: The Order provides relatively specific and granular directives and guidance which, if nothing else, clarify the Biden Administration's comprehensive policy views of AI regulation and attempt to provide an actionable roadmap for their implementation.
  • Mandates for U.S. government, guidance for industry and other nations: While the Order's directives focus primarily on the federal government in its own use, development, and guardrails of AI systems and technologies, the Order outlines an approach to AI governance that the Biden Administration envisions for industries and governments around the world.
  • Capitalizing on the benefits in addition to safeguarding against risks: In sharpest contrast to the regulatory approach contemplated in the EU's AI Act, which is organized principally around the levels of risk posed by AI technologies,10 the Executive Order places nearly as much emphasis on the pressing need to (responsibly) develop and harness the potential benefits of AI as on the need to understand and mitigate its novel risks.

1 Full text available at Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
2 In October 2022, the White House released a "Blueprint for an AI Bill of Rights," which outlined guiding principles for the responsible design and use of AI systems, and announced in the summer of 2023 that it had secured the voluntary commitments of leading AI companies to comply with AI safeguards set by the White House. See "FACT SHEET: Biden-⁠Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI" and "Voluntary AI Commitments".
3 See Sec. 1. The Order articulates the following key principles and priorities: (1) AI must be safe and secure; (2) To lead in AI, the U.S. must promote responsible innovation, competition, and collaboration; (3) Responsible development and use of AI require a commitment to supporting American workers; (4) AI policies must advance equity and civil rights; (5) The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected; (6) Privacy and civil liberties must be protected; (7) The U.S. federal government must manage the risks of its own use of AI; (8) The U.S. federal government should exercise global leadership in societal, economic, and technological progress. Id.
4 See Sec. 4.2(a)(i)(C).
5 See Sec. 4.2(a)(ii).
6 These are defined as: "(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI." Sec. 4.2(b)(i)-(ii).
7 The White House, "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence," October 30, 2023.
8 See Sec. 9(a)(i).
9 See supra note 7.
10 See European Parliament, "EU AI Act: first regulation on artificial intelligence," updated June 14, 2023.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2023 White & Case LLP
