Artificial intelligence (AI) has made enormous strides in recent years and has increasingly moved into the public consciousness.
Increases in computational power, coupled with advances in machine learning, have fueled the rapid rise of AI. This has brought enormous opportunities, as new AI applications have given rise to new ways of doing business. It has also brought potential risks, from unintended impacts on individuals (e.g., AI errors harming an individual's credit score or public reputation) to the risk of misuse of AI by malicious third parties (e.g., by manipulating AI systems to produce inaccurate or misleading output, or by using AI to create deepfakes).
Governments and regulatory bodies around the world have had to act quickly to try to ensure that their regulatory frameworks do not become obsolete. In addition, international organizations such as the G7, the UN, the Council of Europe and the OECD have responded to this technological shift by issuing their own AI frameworks. But they are all scrambling to stay abreast of technological developments, and already there are signs that emerging efforts to regulate AI will struggle to keep pace. In an effort to introduce some degree of international consensus, the UK government organized the first global AI Safety Summit in November 2023, with the aim of encouraging the safe and responsible development of AI around the world. The EU is also implementing the first comprehensive horizontal legal framework for the regulation of AI systems across EU Member States (the EU AI Act is addressed in more detail here: AI watch: Global regulatory tracker - European Union, and you can read our EU AI Act Handbook here).
Most jurisdictions have sought to strike a balance between encouraging AI innovation and investment and creating rules to protect against possible harms. However, jurisdictions around the world have taken substantially different approaches to achieving these goals, which has in turn increased the risk that businesses face from a fragmented and inconsistent AI regulatory environment. Nevertheless, certain trends are becoming clearer at this stage.
Businesses in almost all sectors need to keep a close eye on these developments to ensure that they are aware of current AI regulations and forthcoming trends, in order to identify both new opportunities and new potential business risks. But even at this early stage, the inconsistent approaches that jurisdictions have taken to the core question of how to regulate AI are clear. As a result, international businesses may face substantially different AI regulatory compliance challenges in different parts of the world. To that end, this AI Tracker is designed to give businesses an understanding of the state of play of AI regulations in the core markets in which they operate. It analyzes the approach each jurisdiction has taken to AI regulation and offers commentary on the likely direction of travel.
Because global AI regulations remain in a constant state of flux, this AI Tracker will develop over time, adding updates and new jurisdictions when appropriate. Stay tuned, as we continue to provide insights to help businesses navigate these ever-evolving issues.
The African Union's Continental AI Strategy sets the stage for a unified approach to AI governance across the continent.
Voluntary AI Ethics Principles guide responsible AI development in Australia, with potential reforms under consideration.
The enactment of Brazil's proposed AI Regulation remains uncertain with compliance requirements pending review.
AIDA expected to regulate AI at the federal level in Canada, but provincial legislation has yet to be introduced.
The Interim AI Measures is China's first specific, administrative regulation on the management of generative AI services.
The Council of Europe is developing a new Convention on AI to safeguard human rights, democracy, and the rule of law in the digital space covering governance, accountability and risk assessment.
The successful implementation of the EU AI Act into national law is the primary focus for the Czech Republic, with its National AI Strategy being the main policy document.
The EU introduces the pioneering EU AI Act, aiming to become a global hub for human-centric, trustworthy AI.
France actively participates in international efforts and proposes sector-specific laws.
The G7's AI regulations mandate Member States' compliance with international human rights law and relevant international frameworks.
Germany evaluates AI-specific legislation needs and actively engages in international initiatives.
Hong Kong lacks a comprehensive AI legislative framework but is developing sector-specific guidelines and regulations, and investing in AI.
National frameworks inform India’s approach to AI regulation, with sector-specific initiatives in finance and health sectors.
Israel promotes responsible AI innovation through policy and sector-specific guidelines to address core issues and ethical principles.
Japan adopts a soft law approach to AI governance but lawmakers advance proposal for a hard law approach for certain harms.
Kenya's National AI Strategy and Code of Practice expected to set the foundation for AI regulation once finalized.
Nigeria's draft National AI Policy is underway and will pave the way for a comprehensive national AI strategy.
Position paper informs Norwegian approach to AI, with sector-specific legislative amendments to regulate developments in AI.
The OECD's AI recommendations encourage Member States to uphold principles of trustworthy AI.
Saudi Arabia is yet to enact AI Regulations, relying on guidelines to establish practice standards and general principles.
Singapore's AI frameworks guide AI ethical and governance principles, with existing sector-specific regulations addressing AI risks.
South Africa is yet to announce any AI regulation proposals but is in the process of obtaining inputs for a draft National AI plan.
South Korea's AI Act to serve as a consolidated body of law governing AI once approved by the National Assembly.
Spain creates Europe's first AI supervisory agency and actively participates in EU AI Act negotiations.
Switzerland's National AI Strategy sets out guidelines for the use of AI, and aims to finalize an AI regulatory proposal in 2025.
Draft laws and guidelines are under consideration in Taiwan, with sector-specific initiatives already in place.
Turkey has published multiple guidelines on the use of AI in various sectors, with a bill for AI regulation now in the legislative process.
Mainland UAE has published an array of decrees and guidelines regarding regulation of AI, while the ADGM and DIFC free zones each rely on amendments to existing data protection laws to regulate AI.
The UK prioritizes a flexible framework over comprehensive regulation and emphasizes sector-specific laws.
The UN's new draft resolution on AI encourages Member States to implement national regulatory and governance approaches for a global consensus on safe, secure and trustworthy AI systems.
The US relies on existing federal laws and guidelines to regulate AI but aims to introduce AI legislation and a federal regulation authority.
Japan adopts a soft law approach to AI governance but lawmakers advance proposal for a hard law approach for certain harms.
On May 28, 2025, Japan's Parliament enacted a bill establishing a new law centered on promoting AI, titled the "Act on Promotion of Research and Development and Utilization of Artificial Intelligence-Related Technologies" (the "AI Bill").1 The AI Bill is Japan's first law expressly regulating AI. It states that the government will establish an AI Strategy Center (explained further below), which is expected to launch in summer 2025, with the Fundamental AI Plan expected to be implemented within the year.
Japan had previously favored a soft law approach through the AI Guidelines for Business, published in April 2024 and later updated in December 2024 and March 2025 (the "Guidelines for Business").2 However, in August 2024, the Cabinet Office formed the AI Institutional Study Group with a mandate to determine, in consultation with various stakeholders, the next steps for AI regulation in Japan. On February 4, 2025, the Study Group published an interim summary of its findings.3 The basic principle of those findings was to promote AI innovation within Japan while mitigating risks.
The AI Bill primarily focuses on establishing core principles for the research, development and use of AI, formulating the government's Fundamental AI Plan, implementing basic national policies, and establishing an AI Strategy Center.
The majority of its provisions set forth a framework for future AI-related laws and policies rather than imposing specific requirements now. However, certain provisions directly affect private businesses and place responsibilities on AI developers, AI providers, and business users (each an "AI Business Actor"). For example:
Article 7 – The duty of AI Business Actors to make reasonable efforts to use AI to improve the efficiency and overall level of their business in line with the core principles of the Bill.
Articles 4 and 5 – Give national or local government bodies the authority to create policies that affect AI Business Actors.
Article 16 – The duty of AI Business Actors to cooperate with investigations and/or guidance given by governmental organizations.
Article 25(2) – Gives the AI Strategy Center the general ability to request cooperation from any entity as it deems necessary to perform its duties.
The AI Bill itself does not provide for penalties or fines for failing to cooperate. However, Article 16 gives the government broad authority to take measures based on the investigations it conducts. One measure under consideration is to publicly name companies that refuse to cooperate, although the Bill contains no specific provisions to that effect.
While the AI Bill itself does not provide detailed guidance for AI Business Actors, Article 13 states that the government will continue to publish updated guidelines regarding proper research, development, and use of AI.
Separately, Japan has published the Hiroshima International Guiding Principles for Organizations Developing Advanced AI Systems (the Hiroshima Principles), which aim to establish and promote guidelines worldwide for safe, secure, and trustworthy AI.4
As stated above, the AI Bill was approved by the Cabinet Office on February 28, 2025, and enacted by Parliament on May 28, 2025. The AI Strategy Center will likely launch sometime in the summer of 2025, and the Fundamental AI Plan is expected to be implemented within the year.
Various laws, though not adopted specifically to regulate AI, are likely to affect the development or use of AI in Japan. A non-exhaustive list of key examples includes:
There have also been efforts to legislate the takedown and prevention of fake or false content. On May 9, 2024, the Information Distribution Providers Act (an amendment to the existing Providers' Liability Limitation Act) was passed by the Diet (the national legislature of Japan), with the aim of expediting content takedown requests. Although it is not an AI-specific law and does not address AI-generated content, the Draft Discussion Points refer to it as a means of addressing risks arising from AI.
In addition, various acts involving AI may already be caught by the Criminal Code: using AI to defame another person could fall within the ambit of defamation; using AI-generated fake content to interfere with someone's business could be punishable as obstruction of business; and giving unauthorized commands to another person's computer could also be punishable under the Criminal Code.
"Artificial Intelligence-related technologies" is defined under Article 2 of the AI Bill to mean the technology necessary to realize functions that substitute for human cognitive, reasoning, and judgment abilities through artificial means. The definition also covers technology related to information processing systems that process information using such technology and output results.
The AI Bill does not clearly set out its territorial scope. However, Article 16, which sets out the duty to cooperate with government investigations, states that the government will analyze information regarding the development and use of AI both domestically and abroad. Furthermore, the AI Bill does not distinguish between onshore and offshore entities in its definition of research institutions or AI Business Actors. The scope may be further refined by the AI Strategy Center as it implements the Fundamental AI Plan.
The AI Bill is applicable broadly to AI developers, AI providers, and business users, regardless of their sector.
As noted above, private businesses are under a duty to make reasonable efforts to use AI in accordance with the Bill's core principles. They also must cooperate with the government's investigations into the development and use of AI. AI Business Actors must also follow any guidance, advice, or request that the government might issue post-investigation.
The AI Bill states that its core principles are to promote competitive productivity, to ensure transparency, and to foster productivity through advancing fundamental AI research and personnel development. While advancing these goals, the government aims to investigate and guide organizations away from the improper use of AI, meaning uses that may lead to the violation of rights or harm to prosperity.
Additionally, the Hiroshima Principles identify several significant risks, including: disinformation, copyright, cybersecurity, risks to health and safety, and societal risks (e.g., the ways in which advanced AI systems can give rise to harmful bias and discrimination).
The government has noted the following risks as ones that Japan should prioritize: safety, privacy and fairness, national security and crime, property protection, and intellectual property.
Further, regarding copyright, in-depth discussions are being held in Japan about how existing laws (i.e., the Copyright Act of Japan) should address issues concerning rights and harms that may arise from generative AI. Additionally, the Council of the Agency for Cultural Affairs has announced its position on copyright issues relating to the training of AI on human-created works and to AI-generated content.5
There are no specific risk categories defined in the AI Bill. The investigation right granted to the government is aimed at the research, development or use of AI for improper purposes. Improper purposes are described broadly as potentially leading to the violation of rights or harming prosperity.
As noted above, the AI Bill does not provide for detailed compliance obligations, but requires AI Business Actors and research institutions to cooperate with any investigations and follow guidance.
The current Guidelines for Business do provide certain general principles which AI Business Actors are expected to incorporate into the training and deployment of their products and services.6 However, it is up to each AI Business Actor to determine how to give effect to the principles. The principles are:
The AI Bill does not specify a regulatory agency that will conduct investigations, but the AI Strategy Center will be responsible for the creation and implementation of the Fundamental AI Plan. Additionally, the following ministries and agencies have been substantively engaged in establishing and promoting guidelines regarding AI:
Guidelines promulgated by ministries in Japan are often followed closely by companies and the public, even though they are not binding law.
As noted above, the AI Bill imposes the duty to make reasonable efforts to: (i) use AI in accordance with the Bill's core guiding principles; (ii) cooperate with investigations; and (iii) follow guidance issued as a result. There are no specific penalties or fines associated with refusing to cooperate, but the AI Bill gives the government broad authority to take actions it believes are necessary based on its investigations and to fulfill its duties.
1 See the Act on Promotion of Research and Development and Utilization of Artificial Intelligence-Related Technologies (in Japanese) here.
2 See the AI Guidelines for Business Version 1.1 here.
3 See the interim summary published by the AI Institutional Study Group here.
4 See the Hiroshima International Guiding Principles for Organizations Developing Advanced AI Systems here.
5 See the General Understanding on AI and Copyright (March 15, 2024) here. The overview in English is available here.
6 See the AI Guidelines for Business Version 1.0 (here), Part 2C.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2025 White & Case LLP