White & Case Tech Newsflash

Welcome to the third edition of our AI Legal News Summer Roundup! After five class actions were filed between June 28 and July 11 (as reported in our first edition of this series), another class action lawsuit was filed against Microsoft on July 21, this time concerning personal data collected by Microsoft through its "Edge" web browser (see Update 1). On July 28, we also saw what we believe is one of the first U.S. patent infringement cases directly targeting technologies that service large language models and generative AI, when Korean company FriendliAI, Inc. sued New York-based Hugging Face, Inc. (see Update 2).

"Urgent" calls for government regulation persist (including at the U.S. Senate Committee on Judiciary hearing on "Oversight of A.I.: Principles for Regulation" (see Update 8), from the Federal Reserve (see Update 6), in the U.K. House of Lords (see Update 12), from the UK Deputy Prime Minister's office (see Update 13) and from the Australian Communications and Media Authority in connection with their voluntary code on disinformation and misinformation (see Update 18)). In the absence of more comprehensive regulation on AI, in the U.S., bills are being introduced to establish organizations that will assess the impact of new technologies and respond to perceived threats. For example, Congress proposes the creation of an AI research infrastructure that provides greater access to development tools (see Update 3), Massachusetts proposes establishing a research commission to review use of AI decision-making by the State executive branch (see Update 4) and Illinois has added "deepfake" pornography as a cause of action to an existing state law against revenge pornography (see Update 5). Governments also demonstrate a willingness to consult and collaborate in pursuing safety and ethical guidelines for AI at bilateral and multilateral levels, e.g., Singapore and the UK, Japan and the EU (as described in our first edition), members of the U.N. Security Council (as described in our second edition) and, more recently, the Australian and American governments, which issued a joint statement on July 29 expressing a shared commitment to the safe and ethical development of critical technologies, including AI and quantum computing.

Industry is also coordinating to address concerns and to influence the dialogue on the regulation of AI, with prominent companies in the AI field, namely Google, OpenAI, Microsoft and Anthropic, forming the Frontier Model Forum to ensure responsible development of certain AI models (see Update 7), and GitHub, Hugging Face, Creative Commons and others submitting a paper on open-source development of AI models to the EU Parliament (see Update 9).

Finally, similar to our observations in our second edition of this series, privacy and personal data rights continue to be central to developments in this field. For example, in South Korea, the Personal Information Protection Commission released guidelines on the safe use of AI and fined OpenAI 3.6 million won (US$3,000) for failing to comply with the notification requirements for a data breach incident under the Personal Information Protection Act, signaling heightened regulatory scrutiny for generative AI companies (see Update 19).

In this third issue, we highlight 19 key developments we've identified from the United States, United Kingdom, Europe and APAC between July 21 and August 4, 2023:

1. United States: Microsoft Edge users in California and Washington file a class action lawsuit

On July 21, three California and Washington plaintiffs, including one minor, filed Saeedy et al v. Microsoft Corporation, No. 2:23-cv-01104 (W.D. Wash. Jul 21, 2023), a class action lawsuit alleging that Microsoft programmed its "Edge" internet browser to intercept, collect and send to itself private data relating to the Plaintiffs' internet browsing activities, internet searches and online shopping behaviors. The Plaintiffs claim that Microsoft used their data to develop its AI and machine-learning systems, provide targeted advertising and improve Microsoft's software, services and devices. The Plaintiffs asserted 13 causes of action, including: (1) intercepting the Plaintiffs' electronic content and using it for the Defendant's gain in violation of the Electronic Communications Privacy Act; (2) intentionally accessing the Plaintiffs' protected computers and obtaining information in excess of its authority in violation of the Computer Fraud and Abuse Act; (3) intercepting, collecting and using the California Plaintiffs' private data in violation of California's constitutional right to privacy and other state privacy acts; and (4) unfair or deceptive acts or business practices. The Plaintiffs seek injunctive relief as well as restitution, disgorgement and statutory, actual and punitive damages.

2. United States: FriendliAI Inc. sues Hugging Face, Inc. alleging patent infringement related to AI technology

On July 28, in FriendliAI Inc. v. Hugging Face, Inc., No. 1:23-cv-00816 (D. Del. Jul 28, 2023), FriendliAI Inc. filed a complaint against Hugging Face, Inc. alleging that the Defendant's inference server for large language models (known as Text Generation Inference) utilizes continuous or dynamic batching features that infringe the Plaintiff's patent (U.S. Patent No. 11,442,775). The '775 patent includes a claim for a solution (known as PeriFlow or Orca) that serves large-scale AI transformer models by providing "batching with iteration-level scheduling," which results in an "increase in throughput and decrease in latency." The Plaintiff seeks injunctive relief or a compulsory ongoing license fee, as well as statutory and actual damages, among other forms of relief. This is one of the first U.S. patent infringement cases to directly target technologies that service large language models and generative AI.
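For readers less familiar with the underlying technology, the short Python sketch below illustrates the general idea of iteration-level (or "continuous") batching as it is commonly described in LLM-serving engineering: the scheduler admits new requests and retires finished ones between individual token-generation iterations, rather than waiting for an entire batch to complete. This is a simplified, hypothetical illustration only, not the method claimed in the '775 patent or the implementation in Hugging Face's Text Generation Inference; the Request class, the continuous_batching function and all token counts are invented for demonstration.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    rid: int                 # hypothetical request identifier
    tokens_remaining: int    # tokens this request still needs to generate
    output: list = field(default_factory=list)

def continuous_batching(waiting: deque, max_batch: int = 4) -> None:
    """Run one decode iteration at a time, admitting and retiring
    requests between iterations instead of between whole batches."""
    active: list = []
    step = 0
    while waiting or active:
        # Iteration-level scheduling: top up the batch every iteration,
        # so short requests are not stuck behind long-running ones.
        while waiting and len(active) < max_batch:
            active.append(waiting.popleft())
        # One forward pass generates one token for every active request.
        for req in active:
            req.output.append(f"tok{step}")
            req.tokens_remaining -= 1
        # Retire finished requests immediately, freeing batch slots.
        for req in [r for r in active if r.tokens_remaining == 0]:
            print(f"step {step}: request {req.rid} finished "
                  f"({len(req.output)} tokens)")
        active = [r for r in active if r.tokens_remaining > 0]
        step += 1

# Requests of very different lengths: under static batching, the short
# requests behind the 12-token request would wait for the whole batch;
# here they are admitted as soon as a slot frees up.
continuous_batching(deque([Request(0, 12), Request(1, 2), Request(2, 3),
                           Request(3, 2), Request(4, 2)]), max_batch=3)

In a static-batching server, every request in a batch waits for the longest one to finish; scheduling at the iteration level is the commonly cited way to recover that lost capacity, which corresponds to the throughput and latency benefit the complaint attributes to the patented technique.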

3. United States: Congressional AI Caucus proposes creation of National Artificial Intelligence Research Resource

On July 28, the bipartisan co-chairs of the Congressional AI Caucus (Anna Eshoo (D-CA), Michael McCaul (R-TX), Don Beyer (D-VA) and Jay Obernolte (R-CA)) introduced H.R.5077, a federal bill to establish a National Artificial Intelligence Research Resource that would provide AI development resources to researchers and students in the U.S. The bill aims to spur American innovation and collaboration through a shared, central access point to computational tools and educational information. The bill was referred to the House Committee on Science, Space, and Technology for further consideration. On July 27, Senators Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ) and Mike Rounds (R-SD) introduced S.2714, a companion bill in the Senate, which was referred to the Senate Committee on Commerce, Science, and Transportation. As of the date of this publication, the text of each bill is not yet available.

4. United States: Massachusetts House Committee advances commission on automated decision-making by government

On August 3, the Massachusetts Joint Committee on Advanced Information Technology, the Internet and Cybersecurity (JCAITIC) introduced H.4024, a state bill establishing a commission on automated decision-making by government. If the bill is enacted, the commission would be responsible for, among other things: (1) reviewing the use of algorithms or AI as a replacement for human decision-making by executive agencies; (2) making recommendations to the legislature regarding regulation, limits, standards and safeguards for such automated decision systems; and (3) submitting a report and recommendations to the Massachusetts governor, the House, the Senate and the JCAITIC by December 31, 2023.

5. United States: AI-generated nonconsensual “deepfake” pornography added as cause of action to existing Illinois law

On July 28, Illinois Governor JB Pritzker signed H.B.2123, an amendment to Illinois's Civil Remedies for Nonconsensual Dissemination of Private Sexual Images Act that provides a new civil cause of action for people who are depicted in intentionally digitally altered sexual images that are disseminated without their consent. Plaintiffs can seek damages as well as equitable relief ordering a defendant to cease display or disclosure of the deepfake pornography. The bill was introduced in response to growing concerns about advancements in AI technology (particularly AI-generated digital forgeries or "deepfakes"), and the potential for such technology to be used for sexual exploitation and harassment. The amendment will take effect on January 1, 2024.

6. United States: Federal Reserve warns of emerging AI technologies and cybersecurity risks to the U.S. financial system

On August 2, the Federal Reserve released its annual "Cybersecurity and Financial System Resilience Report," which identified AI (including generative AI) and quantum computing as emerging threats to the security and integrity of the nation's financial system. The report warns that malicious actors may use AI to automate and increase the occurrence and impact of cyber incidents, e.g., to perform social engineering, email phishing and text message "smishing" attacks. Additionally, the report notes that malicious actors may use quantum computing to more easily avoid detection. The Federal Reserve highlights the importance of "collective actions across government and strong collaboration with the private sector" to mitigate cybersecurity risks and maintain a resilient financial sector.

7. United States: Google, OpenAI, Microsoft and Anthropic create industry group to ensure frontier AI models are developed safely

On July 26, Google, OpenAI, Microsoft and Anthropic announced the formation of the Frontier Model Forum, a new industry body aimed at providing benchmarks and putting safeguards in place for frontier AI models (i.e., large-scale, machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks). The Forum's core objectives include: (1) advancing AI safety research; (2) identifying best practices for responsible development of frontier models; (3) collaborating with key policy and decision makers (including to support and contribute to existing government and multilateral initiatives); and (4) supporting the development of applications that address real-world issues. The Forum calls on other technology companies that meet its criteria to become members and help further the Forum's goals.

8. United States: Anthropic's CEO urges Senate to pass legislation in response to the rapid pace of AI innovation

On July 25, the Subcommittee on Privacy, Technology and the Law of the U.S. Senate Committee on the Judiciary held a hearing titled "Oversight of A.I.: Principles for Regulation." Anthropic CEO Dario Amodei was one of three witnesses who presented. In his Written Testimony, Amodei warned about AI's rapid advancement and the risk of AI systems being misused to cause large-scale destruction, particularly in the domain of biology, in the next two to three years. Amodei called for an "urgent policy response" and recommended three broad classes of policies: (1) securing the AI supply chain within the U.S.; (2) implementing a "testing and auditing regime" for new and more powerful AI models; and (3) funding the measurement of AI system behaviors and research into how to undertake such measurements (especially given that the science of testing and auditing AI systems is still underdeveloped). The urgent need for regulatory action (beyond voluntary commitments) was also echoed in the Written Testimony of UC Berkeley Professor of Computer Science Stuart Russell and the Written Testimony of University of Montreal Professor of Computer Science Yoshua Bengio. The hearing took place days after prominent AI companies made voluntary commitments to the White House to implement measures aimed at increasing AI technology safety (as described in Update 1 of our second edition).

9. United States: The Associated Press (AP) secures “most favored nation” protections; other publishers in talks with AI platforms

In our second edition, we noted that the AP agreed to license its news archive to OpenAI. On July 28, The Wall Street Journal1 reported that the AP included a "most favored nation" clause in this agreement, which would prohibit OpenAI from negotiating better terms with other publishers for the licensing of news content. This reflects the inherent difficulty of placing a value on content used for AI training; other news publishers are also reportedly seeking payment for the use of their content.2 OpenAI CEO Sam Altman has indicated a willingness to "pay a lot for very high-quality data in certain domains," such as the sciences, and is reported to have made other deals, including licensing data from Shutterstock.3

10. United States: The U.S. Securities and Exchange Commission (SEC) proposes new rules governing the use of AI and predictive data analytics by broker-dealers and investment advisers

On July 26, the SEC proposed new rules requiring broker-dealers and investment advisers to address conflicts of interest arising from their use (or potential use) of AI and predictive data analytics when interacting with investors. The proposed rules would require firms to: (1) identify conflicts; (2) determine whether any such conflict results in placing the firm's or its associated persons' interests ahead of investors' interests; and (3) eliminate or neutralize the effect of such conflicts. Many firms currently use AI-based technologies to optimize for, predict or guide investment-related behaviors or outcomes. The SEC expressed concern that technology used in this manner could place firms' interests ahead of investors' interests, resulting in harm to clients. The SEC stated that it will review public comments on the proposal before voting on a final version in 2024.

11. European Union: GitHub and other companies call upon EU policymakers to better support the open-source development of AI models through the EU AI Act

According to a report from The Verge,4 a group of companies, including GitHub, Hugging Face and Creative Commons, has submitted a paper titled "Supporting Open Source and Open Science in the EU AI Act" to the EU Parliament ahead of the EU AI Act's finalization, calling for more support for the open-source development of AI models. The paper includes recommendations for "clearer definitions of AI components, clarifying that hobbyists and researchers working on open-source models are not commercially benefiting from AI, allowing limited real-world testing for AI projects, and setting proportional requirements for different foundation models." The paper also warns that classifying certain AI models as high-risk, regardless of the size of the developer, could put an excessive financial strain on smaller developers, for example, if they are required to hire third-party auditors. Further, the paper notes that the EU AI Act could "set a global precedent in regulating AI to address its risks while encouraging innovation," but emphasizes that this depends on the Act "supporting the blossoming open ecosystem approach to AI."

12. United Kingdom: House of Lords debate on advanced AI

On July 24, the House of Lords held a general debate on the ongoing development of advanced AI, the associated risks and potential regulatory approaches, including whether the government's White Paper on AI regulation, published in March, had already been overtaken by recent events such that new legislation was required. Viscount Camrose (Conservative), Minister for AI and Intellectual Property, responded for the government. He highlighted various government actions relating to AI (including considering legislation currently before Parliament, e.g., the Online Safety Bill and the Data Protection and Digital Information Bill), stating "The AI regulation White Paper, published this March, set out our proportionate, outcomes-focused and adaptable regulatory framework…We will ensure that there are protections for the public without holding businesses back from using AI technology to deliver stronger economic growth." He also noted the "importance of regulator upskilling and co-ordination" and reaffirmed the government's intention to "support [the UK's] existing regulators to apply the [White Paper on AI regulation] principles, using their sectoral expertise…[to] build best practice." However, Viscount Camrose also confirmed that the government viewed the White Paper on AI regulation as a "first step in addressing the risks and opportunities presented by AI" and was "unafraid to take further steps if needed to ensure safe and responsible AI innovation." His comments highlight the challenges associated with regulating a rapidly advancing technology across multiple industry sectors and balancing the freedom to innovate with safety and security in AI development.

13. United Kingdom: Government flags AI as chronic risk to the UK in latest National Risk Register

On August 3, the UK Deputy Prime Minister Oliver Dowden unveiled the latest National Risk Register (NRR), which has been released every two years since 2008. The latest edition highlights 89 different risks within nine risk themes that could have a "substantial impact on the UK's safety, security and/or critical systems at a national level." For the first time, AI is flagged as one of four chronic risks, alongside climate change, antimicrobial resistance and serious and organized crime. The latest NRR states that "[a]dvances in AI systems and their capabilities have a number of implications spanning chronic and acute risks; for example, it could cause an increase in harmful misinformation and disinformation, or if handled improperly, reduce economic competitiveness." In addition to highlighting the UK's National AI Strategy (published in 2021) and the UK Government's White Paper on AI regulation (see above in Update 12), the NRR points to the UK Government's commitment to host "the first global summit on AI Safety," which is said to "bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor risks from AI."

14. Germany: Government is not currently planning its own national AI regulation, focusing instead on EU and G7 efforts

According to a report by Table.Media,5 Anna Christmann (Green Party), the Commissioner of the German Federal Ministry of Economics for the Digital Economy and Startups, stated: "We are not planning a national AI regulation in Germany." Christmann emphasized that "there is a danger that Europe will become unattractive for AI developers" if developments are discouraged at an early stage, but also noted that fundamental rights would have to be sufficiently taken into account. A spokesperson for the Federal Ministry of Digital Affairs and Transport (BMDV) also spoke out against a unilateral German approach to AI regulation, according to an article in the Frankfurter Rundschau.6 The spokesperson stated: "We want to support AI developers in our country by simplifying access to data and providing clarity on future standards" and "purely national rules are counterproductive here." Instead, the German government is involved in international regulatory processes, namely the EU AI Act and the G7 Hiroshima Process on generative AI. According to the BMDV spokesperson, Germany is "promoting clear transparency rules at the G7 level that leave room for innovation."

15. France: The French data protection authority launches a call for contributions on the creation of AI databases

The Commission Nationale de l'Informatique et des Libertés (CNIL), France's data protection authority, published an AI action plan in May 2023 to address key questions concerning the use and protection of personal data and the application of the General Data Protection Regulation (GDPR) in AI systems. On July 27, the CNIL published a five-page questionnaire that invites stakeholder contributions on the following key areas: (1) the use and purpose of processing personal data in AI (particularly generative AI); (2) methods of data selection, cleaning and minimization; (3) the implementation of the GDPR's "data protection by design" and "data protection by default" principles in AI systems; and (4) the legal bases and criteria for determining whether personal data processing serves a "legitimate interest" in the creation of databases by AI systems and/or of training databases used by AI, as required under the GDPR. The CNIL will consider these contributions in preparing publications (e.g., industry guidelines and recommendations) on AI.

16. France: French data protection agency launches a “sandbox” dedicated to AI projects for the benefit of public services

On July 21, the CNIL launched the third edition of its "sandbox" program, dedicated to AI projects for the benefit of public services. While not a regulatory sandbox, the CNIL's sandbox will provide an environment to help innovators deal with emerging issues relating to personal data regulations and identify and implement solutions in compliance with applicable law. The CNIL chose AI as this year's theme because it views AI as particularly useful in public sector activities (including improving the quality of public services, facilitating the work of public agents and having a direct impact on citizens' daily experiences). Applications for the sandbox are open until September 30, 2023. At the end of the selection process, the CNIL will choose three projects to assist across three phases: (1) a support phase (identifying the issues raised by the project and possible pragmatic solutions); (2) an implementation phase (the organization implements the CNIL's recommendations); and (3) a feedback phase (the CNIL publishes a report summarizing its work and recommendations for each project).

17. Australia: Australia's Minister for Industry and Science states that self-regulation is insufficient for AI companies

In June 2023, the Australian government began seeking submissions on its consultation paper "Safe and responsible AI in Australia" (discussed in Update 6 of our first edition). Submissions were sought on potential gaps in Australia's existing domestic governance landscape and any possible additional governance mechanisms to support the development and adoption of AI. While submissions closed on August 4, 2023, Australia's Minister for Industry and Science, Ed Husic, recently flagged, in an interview with the Guardian Australia, that "the era of self-regulation is over" and that "appropriate rules and credentials [should] apply to high-risk applications of AI." This suggests that Australia may adopt direct government regulation over generative AI services, rather than allow self-regulation, as may be the case in some overseas jurisdictions.

18. Australia: Australian Communications and Media Authority (ACMA) releases disinformation report

On July 25, ACMA released a report outlining its views on digital platforms' efforts under the Australian Code of Practice on Disinformation and Misinformation (Code), a voluntary standard for industry self-regulation. The Code currently has eight signatories, including Adobe, Google, Meta, Microsoft and Redbubble. The report was produced as part of ACMA's ongoing oversight of the Code and its operation. While ACMA notes that the revised version of the Code, which commenced in December 2022, addressed "some pressing issues," it outlines three key areas for improvement. First, industry needs to take further steps to review the scope of the Code and its ability to adapt quickly to changes in technology and services. ACMA notes the exponential growth in generative AI technologies since the revised Code commenced, and the increased risk of generative AI being used to produce and disseminate disinformation and misinformation at scale. Second, ACMA believes reporting by signatories must improve to enable an assessment of signatories' progress toward achieving the Code's objectives and outcomes. Third, ACMA reports that there remains an urgent need to improve the level of transparency about what measures digital platforms are taking to comply with the Code and the effectiveness of those measures. The release of the report is timely and may inform public consultation on the exposure draft of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023, which is open for comment until August 20, 2023.

19. South Korea: South Korea's privacy regulator releases guidelines on the safe use of artificial intelligence and imposes administrative sanctions on OpenAI

On August 3, 2023, the Personal Information Protection Commission (PIPC) released the "Policy Direction for Safe use of Personal Information in the Age of Artificial Intelligence" (Policy Direction). The Policy Direction signals South Korea's purposeful adoption of a principles-based approach to the regulation of generative AI services, intended to account for the rapidly evolving nature of generative AI. In seeking to minimize the risk of privacy violations, the Policy Direction sets out principles on how the current Personal Information Protection Act applies to all phases of AI development, including data collection and AI learning. The Policy Direction also announces that the PIPC will establish an AI Privacy Team in October to provide consultation services to businesses on the legality of using personal information in AI development. The PIPC indicated plans to provide further guidelines on the use of specific data sources in AI, including biometric data, publicly available data and data gathered from mobile imaging equipment. Finally, the PIPC intends to strengthen global cooperation in developing norms for AI development. This follows the PIPC fining OpenAI 3.6 million won (US$3,000) on July 27 for failing to comply with the notification requirements regarding a data breach incident under the Personal Information Protection Act. The data breach, which occurred in March 2023, related to the ChatGPT Plus payment system and resulted in the personal details of some users being exposed to other simultaneously active users. While modest in value, this fine illustrates the heightened regulatory scrutiny that generative AI companies are facing from the South Korean privacy regulator. The PIPC also revealed that it will be engaging in fact-gathering and monitoring missions on some of the major AI services developed and deployed in and outside Korea, including ChatGPT, with the aim of minimizing risks to data privacy.

Charlie Hacker (White & Case, Graduate, Sydney), Erica Fox (White & Case, Summer Associate, New York), Avi Tessone (White & Case, Summer Associate, New York), Mick Li (White & Case, Summer Associate, Silicon Valley), Emma Hallab (White & Case, Vacation Clerk, Sydney), and Timo Gaudszun (White & Case, Legal Intern, Berlin) contributed to the development of this publication.

1 As Publishers Seek AI Payments, AP Gets a First-Mover Safeguard, The Wall Street Journal (July 28, 2023).
2 Ibid.
3 Publishers Prepare for Showdown With Microsoft, Google Over AI Tools, The Wall Street Journal (March 27, 2023).
4 Github and others call for more open-source support in EU AI law, The Verge (July 26, 2023).
5 KI-Regulierung: Deutschland setzt auf die EU und die G7 [AI Regulation: Germany Relies on the EU and the G7], Table.Media (July 30, 2023).
6 KI-Regulierung: Deutschland setzt auf die EU und die G7 [AI Regulation: Germany Relies on the EU and the G7], Frankfurter Rundschau (August 6, 2023).

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2023 White & Case LLP
