
As artificial intelligence ("AI") systems and their use continue to advance, the regulatory landscape in the United States is rapidly evolving, but not in a uniform way. Unlike the European Union's comprehensive approach with the EU AI Act, the United States has not enacted a national AI law. Instead, the regulatory framework is emerging at the state level, much like privacy regulations, with an increasing number of states introducing their own AI laws and legislative proposals. The result is a patchwork of rules and requirements that creates a complex and fragmented compliance environment for businesses deploying AI and automated decision-making systems.
In our previous client alert on state AI regulations, we analyzed state and local AI regulations in Colorado, Illinois, and New York, and how state privacy laws provide additional protections and safeguards against automated decision-making systems.
In this client alert, we provide a time-stamped overview of newly enacted state laws aimed at regulating AI and automated decision-making systems. We highlight the key provisions of these regulations and offer practical insights for businesses using AI.
California – Despite concerns raised by Governor Gavin Newsom, on May 1, 2025, the California Privacy Protection Agency ("CPPA") unanimously voted to initiate a public comment period for its proposed regulations on Cybersecurity Audits, Risk Assessments, and Automated Decision-Making Technology ("ADMT") (the "CPPA Regulations"), continuing its long-running effort to finalize these rules. The comment period will remain open until June 2, 2025. Notably, the current draft of the CPPA Regulations reflects substantial revisions from the version published in November 2024, following public feedback received during the formal rulemaking process. Key revisions include:
- Narrowing the definition of ADMT and significant decision: The CPPA Regulations impose certain obligations, including opt-out rights, on businesses that use ADMT to make significant decisions about consumers.
- The current CPPA Regulations significantly narrow the definition of ADMT, which now applies to technologies that process personal information and use computation to "replace or substantially replace" human decision-making, removing the broader "substantially facilitates" language from the earlier draft. This change has important practical implications. For instance, if a business uses automated tools to assist in decision-making but retains meaningful human involvement in the final decision, that use will likely fall outside the scope of the CPPA Regulations and therefore will not trigger obligations to provide opt-out rights.
- Similarly, under the current CPPA regulations, a "significant decision" is defined as one that results "in the provision or denial of financial or lending services, housing, educational enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services." This revised definition narrows the previous scope by removing references to decisions affecting "access to" these services.
- Narrowing the scope of ADMT and risk assessment obligations – Under the current CPPA Regulations, businesses that engage in profiling in employment or educational contexts, or that process personal information to train ADMT, are no longer required to comply with ADMT obligations. However, they are still required to complete a risk assessment. With respect to location-based profiling, the current CPPA Regulations now require risk assessments only when businesses profile consumers based on their presence in sensitive locations, such as educational institutions, pharmacies, or housing shelters. Finally, under the revised rules, profiling for behavioral advertising (e.g., first-party advertising) no longer triggers risk assessment or ADMT compliance requirements.
- Pre-use notices for ADMT – The current CPPA Regulations clarify that businesses using ADMT may include the required pre-use notice within their existing notices at the time of collection.
- Elimination of abridged risk assessment submission – Under the current CPPA Regulations, businesses are no longer required to affirmatively share abridged risk assessments with the CPPA. Nevertheless, businesses are still required to submit risk assessments to the Agency for any year in which they conducted a risk assessment. Risk assessments conducted in 2026 and 2027 must be submitted no later than April 1, 2028.
Arkansas – Governor Sarah Huckabee Sanders recently signed two AI laws: (i) HB 1958, which requires public entities to develop a comprehensive policy regarding the authorized use of AI and ADMT, and (ii) HB 1876, which establishes ownership rights over content generated by generative AI. Specifically, individuals who provide input or directives to a generative AI tool will own the generated content, provided it does not infringe on any copyrights or intellectual property rights. In addition, individuals (excluding employees) who provide data for AI model training will own the resulting trained model, unless the training data was unlawfully obtained. Both laws will take effect on August 3, 2025.
Kentucky – enacted SB 4, which directs the Commonwealth Office of Technology to create policy standards governing the use of AI.
Maryland – enacted HB 956, which establishes a working group tasked with studying the private sector use of AI and making recommendations to the General Assembly on AI regulation and policy standards.
Montana – SB 212, signed into law by Governor Greg Gianforte, establishes a "right to compute," providing that any government restrictions on private ownership or use of computational resources must be limited and narrowly tailored to fulfill a compelling government interest. The law also requires critical infrastructure facilities, whether fully or partially controlled by AI systems, to develop a risk management policy that considers national or international AI risk management frameworks.
Utah – Perhaps most significantly, Governor Spencer Cox signed several AI laws, including SB 332 and SB 226, which amend the state's existing AI Policy Act. That Act requires entities offering consumer-facing generative AI services in regulated professions to disclose when individuals are interacting with AI rather than a human. SB 332 extends the repeal date of the AI Policy Act to July 2027. SB 226 narrows the law's application by providing that disclosure is required only when directly asked by a consumer or supplier, or during a "high-risk" interaction (e.g., during the collection of health, financial, or biometric data). Finally, HB 452 introduces new regulations for AI-supported mental health chatbots in Utah, including a ban on advertising products or services during user interactions and a prohibition on sharing users' personal information.
West Virginia – enacted HB 3187, which creates a task force responsible for identifying economic opportunities related to AI and providing recommendations to the House of Delegates, Senate, and the Governor to develop best practices for AI use in the public sector and protect individual rights and consumer data.
Finally, it's important to note the emerging deregulatory trend for AI at the federal level. This shift is especially evident following President Trump's executive order rescinding President Biden's Executive Order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." In its place, the new policy, titled "Removing Barriers to American Leadership in Artificial Intelligence," aims to sustain and enhance U.S. global dominance in AI to promote human flourishing, economic competitiveness, and national security. To advance this goal, an action plan will be developed and submitted to the President within 180 days. This deregulatory approach is also evident in the recently introduced House Budget Bill, which proposes to preempt and prohibit state enforcement of AI-related laws for the next ten years.
Burak Haylamaz (White & Case, Staff Attorney, Los Angeles) contributed to the development of this publication.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2025 White & Case LLP