Edition 4 of the AI Legal News Summer Roundup
Welcome to the fourth edition of our AI Summer Roundup!
In this edition, key themes include creators and consumers seeking more control and protection over how their content is used to train AI models (whether under copyright law or privacy laws), and governments grappling with the delicate balance between overseeing and regulating AI development, and fostering innovation.
We report on the US copyright court decision confirming that copyright doesn't extend to AI-generated outputs "operating absent any guiding human hand" (see Update 1). In this edition, we also saw major international news organizations releasing an open letter advocating for stronger intellectual property safeguards with respect to AI, including potential licensing arrangements (see Update 6) and Zoom revising its terms to explicitly clarify that customer content will not be used in training AI models after some privacy concerns were raised by the public (see Update 7).
Finally, we observe how governments worldwide are attempting to address AI's dual potential: China's new AI regulations came into force (see Update 9) and South Korea launched an AI working group (see Update 10), while in the US, concerns over bias in algorithms were raised (see Update 2), the White House confirmed an AI executive order is forthcoming (though details are limited), and the Federal Election Commission sought public comment on a petition seeking to prohibit the fraudulent use of AI in campaign advertisements (see Update 5).
In this fourth issue, we highlight 10 key developments we've identified from the United States, Europe, and APAC between August 4 and August 18, 2023:
1. United States: Washington, DC federal court denies copyright protection for AI-generated art
On August 18, in Stephen Thaler v. Shira Perlmutter, No. 1:22-cv-01564 (D.D.C. August 18, 2023), the DC district court agreed with the US Copyright Office's (USCO) decision to deny computer scientist Stephen Thaler's copyright registration for artwork autonomously generated by AI in the absence of any human involvement. The court held that the "Register did not err in denying the copyright registration application," and that "United States copyright law protects only works of human creation." While the court agreed that "[c]opyright is designed to adapt with the times," it ultimately held that "[c]opyright has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand" and that "[h]uman authorship is the bedrock requirement of copyright." Looking forward, however, the court acknowledged that the "increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions regarding how much human input is necessary to qualify the user of an AI system as an 'author' of a generated work," as well as how to assess the originality of AI-generated art trained on unknown existing copyrighted works.
2. United States: New York state proposes legislation concerning use of AI in employment decisions, while the US Equal Employment Opportunity Commission (EEOC) settles AI hiring discrimination lawsuit
On August 4, New York state senator Brad Hoylman-Sigal (D-NY) introduced legislation S7623, which aims to amend current labor laws to make it unlawful to surveil employees residing in New York State via an electronic monitoring tool unless certain requirements are met and notice is provided to the employee. The bill would also completely ban certain uses of electronic monitoring tools, including "… tool[s] that incorporate facial recognition, gait, or emotion recognition technology," or employers "rely[ing] solely on employee data collected through electronic monitoring when making hiring, promotion, termination, disciplinary, or compensation decisions." This is another example of state legislators aiming to curb AI usage to limit bias or discrimination, and requiring data monitoring and auditing for those using AI tools.
Meanwhile, at the federal level, the EEOC clarified how it intends to enforce federal law against companies that use AI to effect unlawful discrimination. In a joint notice of settlement issued on August 9, the EEOC announced that it has settled an AI discrimination lawsuit against a tutoring company that allegedly discriminated against applicants based on their birth dates. The EEOC alleged that the tutoring company "violated federal law by programming [their] online recruitment software to automatically reject older applicants because of their age," and sought back pay for the rejected job applicants. EEOC Chair Charlotte A. Burrows stated that discrimination based on age is unlawful, "even when technology automates the discrimination." Burrows further highlighted that the case was filed as part of the EEOC's recently launched "Artificial Intelligence and Algorithmic Fairness Initiative," which aims to ensure that the use of AI in employment decisions and the workplace complies with the requirements established by federal civil rights law.
3. United States: Prisma Labs avoids trial in privacy class action with arbitration clause
4. United States: Consumer Financial Protection Bureau (CFPB) proposes new regulation that will restrict data brokers' ability to track and sell personal information to fuel AI and targeted advertisements
On August 15, at a roundtable held at the White House, the CFPB announced its intention to crack down on the tracking and selling of personal data by extending the Fair Credit Reporting Act to encompass data brokers and similar businesses that track, sell, and otherwise profit from Americans' personal data, usually without their consent or knowledge. The proposal seeks to ban the sale of consumer data, specifically focusing on credit-header data. This data comprises the segment of a credit report containing personal identification details, such as a person's name, address and Social Security number. The agency is particularly focused on curtailing the use of this data for targeted digital advertising and for training AI (which includes technologies that make automated decisions, such as AI chatbots that can spontaneously address consumer inquiries).
5. United States: Federal Election Commission (FEC) seeks public comment regarding use of AI in campaign advertisements by October 16, 2023
On August 16, the FEC published a notification seeking public comment on a petition for rulemaking regarding the use of AI in campaign advertisements. The petitioner, Public Citizen, asked the FEC to amend its regulation at 11 CFR 110.16, which prohibits a candidate or their agent from fraudulently misrepresenting other candidates or political parties, to make clear that the prohibition applies to the use of AI in campaign advertisements. Public comments, which can be submitted electronically via the FEC's website, are due by October 16, 2023.
6. Global: Major international news organizations release open letter calling for regulatory and industry action to enhance transparency and intellectual property protection in AI
In an open letter published on August 9, a group of major international news organizations, including Agence France-Presse, the European Pressphoto Agency, Getty Images, News Media Alliance, The Associated Press, and The Authors Guild, called for enhanced transparency of AI training sets and better protection of copyrighted material being used to train AI models. The news organizations state: "Generative AI and large language models […] disseminate [proprietary media] content and information to their users, often without any consideration of, remuneration to, or attribution to the original creators. Such practices undermine the media industry's core business models." Specifically, the signatories advocate for the following regulatory and industry action in the open letter: (1) "Transparency as to the makeup of all training sets used to create AI models;" (2) "Consent of intellectual property rights holders to the use and copying of their content in training data and outputs;" (3) "Enabling media companies to collectively negotiate with AI model operators and developers regarding the terms of the operators' access to and use of their intellectual property;" (4) "Requiring generative AI models and users to clearly, specifically, and consistently identify their outputs and interactions as including AI-generated content;" and (5) "Requiring generative AI model providers to take steps to eliminate bias in and misinformation from their services." This letter is consistent with news media organizations (like The Associated Press and The New York Times) requiring online platforms to license their content.
7. Global: Zoom releases statement and amends Terms of Service (Terms) following discussions about possible implications under EU privacy law over potential use of customer data to train AI models
Zoom issued a statement and updated its Terms to clarify that customer content is not used to train Zoom's or third-party AI models. Zoom originally published a statement on August 7 and then updated it on August 11 to reflect the latest version of the Terms (which were revised twice). Both the Terms and statement now state: "Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like customer content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models." On August 6, a Stack Diary article1 first pointed out changes to Zoom's Terms from March that would have potentially given Zoom broad control over customer data to train AI models.
8. Germany: Federal Commissioner for Data Protection and Freedom of Information calls for rules on using publicly available data to train AI models
In an August 20 interview on the German radio program "Deutschlandfunk,"2 Germany's Federal Commissioner for Data Protection and Freedom of Information Ulrich Kelber called for rules on the use of publicly available data to train AI models and emphasized the need for protection of personal data and of the rights under the EU's General Data Protection Regulation (GDPR). Kelber, who oversees data protection at federal public agencies and certain companies in Germany, commented on the need for: (a) rules governing the purposes for which publicly available data may be processed; (b) technical regulations to block systems from using data; (c) requirements to clearly pseudonymize or anonymize such data before it is used as training data; and (d) rules mandating that AI developers give AI chatbot users the ability to exclude personal data from being used by the AI software.
9. China: Cyberspace Administration of China implements measures for the management of generative AI services
In our first edition, we reported on the Cyberspace Administration of China's interim measures for the management of generative AI services that were released on July 13 (Interim Measures). The Interim Measures require generative AI services to respect social morality and ethics, comply with existing laws and regulations, adhere to core socialist values, and prevent discrimination. The Interim Measures entered into force on August 15.
10. South Korea: South Korea launches a working group to facilitate discussions regarding the regulation of AI
The South Korean Ministry of Science and ICT (Information and Communication Technology) has formed an expert group composed of four subcommittees and 41 members to discuss the regulation of AI. The group will examine how regulations can promote the development and utilization of AI, particularly with respect to personal information, copyright and information security. Further discussions will focus on how to establish trust in the use of AI and, in doing so, how to promote the uptake of AI in industries in which it is currently underutilized. Finally, the group will consider whether reforms are needed in relation to copyright in, and compensation for, works created by AI. These discussions will inform South Korea's AI legislative roadmap, due to be released in late 2023.
Agathe Malphettes (Counsel, Paris), Caroline Lyannaz (Counsel, Paris), Katarina Varriale (Associate, New York), Nashel Jung (Associate, New York), Katharine Pearce (Associate, New York), Sara Tadayyon (Associate, New York), Felix Aspin (Associate, London), Louise Mouclier (Associate, Paris), Laura Tuszynski (Associate, Paris), Rachael Stowasser (Associate, Sydney), Charlie Hacker (Graduate, Sydney), and Timo Gaudszun (Legal Intern, Berlin) contributed to the development of this publication.
1 Alex Ivanovs, Zoom's Updated Terms of Service Permit Training AI on User Content Without Opt-Out, Stack Diary, August 6, 2023.
2 Johannes Kuhn, Interview der Woche, Datenschutz-Beauftragter Kelber sieht Widerspruchslösung kritisch, Deutschlandfunk, August 20, 2023.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2023 White & Case LLP