Recently, the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) announced that they will collaborate on the regulation of Artificial Intelligence (AI). The goal is to put in place clear and appropriate guardrails for the life science industry on the use of AI in the medicines life cycle. The shared intent of the agencies is to promote responsible innovation while supporting the creation of an environment of regulatory certainty.
Background
In the context of increased globalization of medicinal product development, since 2004, the FDA and EMA have collaborated on a wide range of therapeutic areas and types of products, including but not limited to biosimilars, vaccines, pharmacogenomics, patient engagement and advanced-therapy medicinal products. These bilateral collaborations are grouped into so-called “cluster activities”. Other global regulators, including Health Canada, the Japanese Pharmaceuticals and Medical Devices Agency and the Australian Therapeutic Goods Administration, participate in some of the clusters. These dialogues have intensified and deepened over time.
In addition to the long-standing collaboration between the EMA and the FDA in the clusters, the regulators engage in other forms of joint activity: The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) brings together regulatory authorities and the pharmaceutical industry and produces harmonized technical requirements for the development of medicines. The International Conference of Drug Regulatory Authorities (ICDRA) serves as a global forum for regulators, policymakers and stakeholders to strengthen drug regulatory systems. Further, both the EMA and the FDA are members of the International Coalition of Medicines Regulatory Authorities (ICMRA), a forum set up to provide strategic coordination, advocacy and leadership, and both are members of the ICMRA Informal Network for Innovation working group.
The rapid advancements in AI require medicines regulators to develop smart and flexible approaches to regulation. Until now, however, there has been no "cluster" for bilateral collaboration on AI. Since the FDA and the EMA have both previously highlighted the importance of AI in the medicines life cycle, such a collaboration was overdue.
EMA’s Network Data Steering Group’s AI Workstream
In the European Union (EU), the ecosystem of laws and guidance surrounding AI is increasingly sophisticated, encompassing, inter alia, the EU AI Act, the EMA AI reflection paper, Annex 22 to the EU Good Manufacturing Practice (GMP) guidelines, and interactions with other relevant laws and guidance, such as on data protection. Against this background, the EMA's Network Data Steering Group (NDSG) has defined an AI workstream in its 2025–2028 workplan. The workplan sets out how the European medicines regulatory network (i.e., the national competent authorities, the EMA and the European Commission) plans to leverage AI to achieve system efficiencies, increased insights into data and strengthened decision-making, for the benefit of public health. According to the EMA and the Heads of Medicines Agencies (HMA, i.e., the network of the national competent authorities), the application of AI requires a collaborative, coordinated strategy to maximize the benefits of AI while ensuring that uncertainty is adequately explored and risks are mitigated.
Within the AI workstream, the NDSG focuses on four pillars, including collaboration and change management. According to the workplan, under this pillar the NDSG will continue to work with international partners on AI, including the International Coalition of Medicines Regulatory Authorities (ICMRA).
Deliverables under the New Bilateral AI Collaboration
In November 2025, the EMA and HMA held the first multi-stakeholder AI workshop under the new NDSG workplan. On day 1, in a “Spotlight talk: Opportunities for international convergence”, an NDSG member and Senior Assessor with the German regulator and the FDA’s acting Associate Director for Data Science and AI Policy outlined the new AI collaboration between the EMA and the FDA. What we know so far is that the collaboration is set to start with two topics and corresponding deliverables.
- First, a set of ten Guiding Principles of Good AI Practice in Drug Development has been jointly developed by the EMA and the FDA and published as best practices for AI. The guiding principles for Good Machine Learning Practice (GMLP) for medical devices were used to kick-start the discussions. The principles are relevant for companies developing medicines, as well as for marketing authorization applicants and holders. They give guidance on the use of AI in evidence generation and monitoring across all phases of the medicine life cycle, from early research and clinical trials to manufacturing and safety monitoring. A principles-based approach will help regulators, pharmaceutical companies and medicines developers harness the potential of AI while ensuring patient benefit and safety as well as regulatory compliance.
Our key takeaways: The shared strategic goal is to leverage AI to dramatically shorten time to market and improve toxicity/efficacy predictions – explicitly aiming to reduce reliance on animal testing – while ensuring safety. The principle of human-centricity shifts the focus to ethics and values, not just technical validation. The principle of data traceability establishes a standard under which data provenance and processing steps must be detailed and verifiable – a move away from "black box" AI. Life cycle management shifts regulation from static products to continuous monitoring of dynamic systems. The principle of transparency is, in essence, a mandate to make complex outputs accessible to patients in "plain language".
- Second, the agencies plan a harmonized glossary of AI terms. For this, the EMA and the FDA will map the terminology used by both agencies and, where possible, reach consensus on the legal terms that govern each region's use of AI. Harmonized terminology is a prerequisite for true global interoperability. As the agencies emphasize lifecycle-wide and cross-border use of AI, common definitions would allow sponsors to run multi-regional clinical trials without having to "translate" validation and governance frameworks across jurisdictions – a persistent friction point in practice. Stakeholders should keep an eye out for this second deliverable, which is still under development.
Conclusion
The collaboration and alignment on AI topics between the EMA and the FDA are particularly important in areas where guidelines do not yet exist. The collaboration on principles and terminology is an initial step to explore opportunities for further convergence on future global regulatory standards on AI.
1 See the European Medicines Agency’s website, available here.
2 See the International Coalition of Medicines Regulatory Authorities’ website, available here.
3 See the Network Data Steering Group’s workplan 2025-2028, available here.
4 See the Agenda - HMA/EMA Multi-stakeholder workshop on Artificial Intelligence (AI), available here.
5 See the FDA’s and EMA’s Guiding Principles of Good AI Practice in Drug Development, published on 14 January 2026, available here and here.
6 See the final document “Good machine learning practice for medical device development: Guiding principles” released by the International Medical Device Regulators Forum (IMDRF) in January 2025, available here.
Saar Neri (White & Case, Law Clerk, Boston) contributed to the development of this publication.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities. This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice. © 2026 White & Case LLP