New frontiers: How AI is transforming the life sciences industry

Taking the pulse of AI in the life sciences sector and exploring how organizations can maximize opportunities from this rapidly evolving technology to build healthier futures

Embracing change

The convergence of artificial intelligence (AI) and life sciences is no longer a distant promise. Companies operating in the sector are actively embracing the technology and are already achieving measurable results. This exclusive report from White & Case, produced in association with Mergermarket, explores that new reality in depth. Drawing on a proprietary survey of senior executives spanning human pharma and biotech, healthcare provision, medical devices and animal health, it provides a comprehensive overview of where the sector stands and where it may be heading.

Recent market data demonstrates the scale and urgency of this shift. The market for AI in pharma alone is projected to reach US$25.7 billion by 2030, up from around US$4 billion today, according to market research firm Mordor Intelligence. AI-driven drug discovery is also expected to exceed US$20 billion by 2030, per research organization Grand View Research, as firms seek faster routes to novel compounds and more precise trial matching. These forecasts underscore that AI is far more than a back-office optimization tool; it is becoming integral to how life sciences companies design, test and deliver therapies, with growing expectations from regulators, investors and patients alike.

Our findings confirm this transition of AI from experimentation to practical application. Tools are being embedded in product design, trial optimization, diagnostics, drug target identification and commercial execution. Organizations are also adapting internally, reassessing governance structures, workforce capabilities and legal frameworks to ensure AI can scale sustainably and compliantly in a complex regulatory environment. Board-level involvement is growing, and forward-looking investment strategies are being developed to match the pace of innovation.

This research explores the sector's priorities and pain points in detail. The report begins by mapping current use cases and business goals, showing how companies are deploying AI to address real operational needs—from shortening development cycles to improving diagnostic accuracy. It then turns to the structural challenges that remain, including the legal and regulatory complexities surrounding general AI deployment and use, data protection, intellectual property (IP) and cross-border compliance. These risks are shaping how organizations approach partnerships, procurement and policymaking.

Investment is a central theme. Budgets are shifting from discretionary pilots to embedded line items, with many companies pursuing joint ventures, acquisitions or internal buildouts to accelerate capability development. Local sourcing is often favored, but appetite for cross-border expansion remains in markets with advanced regulatory pathways or concentrated AI talent.

In conclusion, the report examines how success is being defined and why it matters. Metrics such as diagnostic accuracy, cost reduction, and patient access are becoming essential to both internal planning and external validation. Encouragingly, the vast majority of respondents believe AI will improve patient outcomes, while investors increasingly view AI maturity as a signal of innovation-readiness and long-term value creation.

With AI moving rapidly up the agenda in boardrooms and regulatory agencies, understanding how to scale responsibly and legally is critical. This report offers a grounded view of what effective AI adoption in life sciences looks like today—and where the next key opportunities and risks lie.

Methodology

In 2025, White & Case, in partnership with Mergermarket, surveyed 200 senior executives of life sciences organizations. The organizations surveyed included human pharma and biotech companies (75), healthcare providers (50), medical device companies (50) and animal health companies (25). Respondents were split almost evenly across EMEA (66), Asia-Pacific (67) and North America (67).

Patient, commercial and regulatory concerns


Key takeaways

01

Data security and high costs rank highest as the main practical obstacles to greater AI adoption

02

Patient privacy/data protection and contractual and licensing issues are the most commonly cited pressing legal concerns

03

A clear majority are concerned about potential liability for intellectual property (IP) infringement related to the use of AI systems

04

Fewer than half of all companies frequently discuss AI governance at board level

While the implementation of AI is growing apace, obstacles to deeper adoption remain. These pressure points are consistent across subsectors: protecting sensitive data; integrating tools with legacy systems; clarifying legal and IP risks; and turning governance policies into real-world practices.

Data security tops the list of practical challenges, cited by 55 percent of respondents. The concern is clear: AI workflows often touch highly sensitive information—patient records, safety data, manufacturing parameters and commercial strategy. Missteps can trigger regulatory scrutiny, legal liability and reputational damage.

Security issues are made more complex by the way in which AI systems aggregate data from many sources, move it across teams and borders, and sometimes introduce third-party platforms into the mix. As one healthcare provider executive says: "Sensitive information may be exposed to cyber threats. Given the sophisticated cyberattacks that we see today, we do not want to risk broader use of data."

Rather than bolting on security as an afterthought, companies making steady progress tend to limit the volume of sensitive data in the first place. Common strategies include restricting how many systems a model touches, pulling only the fields needed and masking data for experimentation. Encryption in transit and at rest is standard, but there is growing emphasis on minimizing duplicates and knowing exactly where third-party vendors store or access data.
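To make the data-minimization pattern concrete, the Python sketch below shows one way a team might pull only the fields a model needs and mask direct identifiers with a salted one-way hash before experimentation. The record structure, field names and masking rule are purely hypothetical illustrations, not a compliance recipe; real implementations would follow the organization's own data governance and security policies.

```python
import hashlib

# Hypothetical patient record pulled from a clinical system.
record = {
    "patient_id": "P-104-2231",
    "name": "Jane Doe",
    "date_of_birth": "1984-06-02",
    "diagnosis_code": "E11.9",
    "lab_result": 7.2,
}

# Minimization: the experiment only ever sees the fields it needs.
ALLOWED_FIELDS = {"diagnosis_code", "lab_result"}

# Hypothetical project salt; in practice this would be managed in a
# secrets vault and rotated, never hard-coded.
SALT = b"rotate-this-secret-per-project"

def mask_identifier(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash so records
    can still be linked within a project without exposing identity."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def prepare_for_experimentation(rec: dict) -> dict:
    """Return a minimized, masked copy of a record for AI experimentation."""
    minimized = {k: v for k, v in rec.items() if k in ALLOWED_FIELDS}
    minimized["subject_key"] = mask_identifier(rec["patient_id"])
    return minimized

print(prepare_for_experimentation(record))
# e.g. {'diagnosis_code': 'E11.9', 'lab_result': 7.2, 'subject_key': '...'}
```

The design choice reflected here mirrors the survey finding: rather than bolting security on afterwards, the sensitive fields are dropped or transformed before any AI tooling, internal or third party, ever touches the data.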

Security concerns sit alongside high costs (46 percent), legacy integration challenges (39 percent), scalability issues (38 percent) and skills gaps (38 percent) as day-to-day hurdles—and they are often intertwined. Older clinical and manufacturing systems were not designed for the volume and cadence of AI workflows, and connecting them safely takes time.

Indeed, integration is often difficult because many AI tools are incompatible with outdated infrastructure and systems: organizations can find themselves with an AI tool on one side and legacy infrastructure on the other, with no easy way to bridge the two.

Moreover, the talent needed to stitch modern data tooling into regulated environments remains in short supply, which can compound integration delays even when funding is available. "We've been struggling with skills gaps for completing AI-related projects," notes the head of technology of an animal health company in India.

Some respondents also cited the difficulty of retaining AI talent in competitive markets, particularly where public-sector salaries or rigid hiring structures make it hard to match industry benchmarks.


Legal and IP concerns

Legal concerns are dominated by two issues: patient privacy and data protection (42 percent) and contractual/licensing risk (42 percent). The breakdown varies by subsector. Healthcare providers, for example, place far more weight on privacy (66 percent) than any other respondent type.
"If we are unable to protect patient data, we risk reputational damage," says the COO of a healthcare provider. "Mitigating the risk of legal claims and settlements is important to avoid any financial pressure on the company."

42%

Percentage of respondents who place patient privacy and data protection among their top two key legal risks relating to the implementation of AI

For pharma companies, this appears to be less of a concern: the use of AI in drug development centers on mapping molecules and their mechanisms of action to identify targets, which inherently raises fewer privacy and personal data protection issues for those organizations.

Animal health companies are more likely to cite licensing risk (60 percent), which aligns with their broader use of third-party tools and reliance on data from dispersed clinics and farms. Medical device companies, meanwhile, frequently highlight cross-border jurisdictional issues (40 percent) and licensing complexity (44 percent), given the multi-market nature of product development, field connectivity and post-market surveillance.

These concerns are not theoretical. Many valuable AI inputs—chemistry datasets, proprietary models, third-party databases and data sourced from contract research organizations (CROs)—are governed by restrictive contracts. Using them for training or fine-tuning without clear rights can lead to breach-of-contract claims, even when copyright law is less definitive.

"There could be the risk of using copyright materials for training AI," says the director of innovation of a Taiwanese healthcare provider. "Developers who do not have complete knowledge of these issues may do so unknowingly."


The IP question

IP protection is also a grey area. While 31 percent of respondents are very concerned about potential IP infringement from using AI, another 51 percent are somewhat concerned. Just 18 percent are not worried. These views are fairly consistent across sectors.

Meanwhile, 60 percent of all respondents judge current protections for AI-assisted outputs to be weak, rising to 80 percent in animal health. In regional terms, in Asia-Pacific, the figure hits 85 percent, compared with 44 percent in EMEA. Uncertainty over who owns model-influenced designs or content, and whether those outputs meet patentability or authorship thresholds, is a recurring theme.

Enforcement uncertainty compounds the problem. When model-assisted content is shared across jurisdictions, companies face a patchwork of standards governing authorship, database rights and inventorship, each of which can affect whether AI-influenced innovations can be protected or commercialized.


Governance, training and board oversight

Many companies are taking steps to improve oversight. A solid majority (63 percent) now have formal AI training programs in place, rising to 72 percent in human pharma. This trend is likely to accelerate.

Under the EU AI Act, companies that develop, deploy or use high-risk AI systems—including many tools used in clinical decision-making, diagnostics and other medical device software—must ensure that relevant personnel receive appropriate training.

Training must cover how the system works, the intended use, known limitations and how to exercise meaningful human oversight, particularly where patient safety or product quality is at stake. This includes not only technical staff, but also those involved in the use, supervision and governance of AI systems. Under the EU AI Act, these requirements have been in effect since February 2025, meaning companies must act now to ensure compliance, particularly those operating in EU markets or selling high-risk AI systems there.

The goal is to ensure that humans remain meaningfully involved and accountable when relying on complex or opaque systems. In practical terms, this means companies must formalize training programs, keep records of participation and update materials in line with system changes or regulatory updates.

For multinational life sciences organizations, even those headquartered outside the EU, these training requirements are fast becoming non-negotiable, especially where products are marketed in the EU. Unless the AI Act is amended, documented, role-specific training is shifting from best practice to regulatory obligation.

Human pharma also leads on broader governance. Nearly two-thirds (64 percent) report having an AI risk-management strategy, compared with 40 percent in devices. This reflects pharma's more advanced use of AI in R&D and safety monitoring. Meanwhile, animal health firms report the highest incidence of AI-specific use policies (60 percent), driven by the fragmented nature of their clinical settings and data sources.

Board-level attention varies. Overall, 48 percent of respondents say AI is frequently discussed at the board level, but the figure rises to 64 percent in medical devices and 56 percent in human pharma. Only 32 percent of animal health companies and 30 percent of healthcare providers report the same. Regionally, North America leads (60 percent), followed by EMEA (47 percent) and Asia-Pacific (39 percent).

A vice president of a life sciences multinational notes: "AI is not a magic wand, so we're careful about piloting and ensuring compliance, especially on privacy and regulatory fronts. Internally, we've got AI tools available across the business, and there are flagship AI projects led by our executive committee focused on simplification and optimization."


Legal uncertainty

There is also a pervasive sense that legal frameworks are still catching up. Two-thirds of respondents (66 percent) agree that lack of legal certainty is a barrier to adoption. That figure jumps to 84 percent in animal health.

The concerns are not just theoretical. Respondents point to shifting requirements around documentation, life cycle monitoring, data transfers and contracting norms. In clinical settings, uncertainty also surrounds how professional accountability or product liability will work when AI contributes to decisions.

As the CEO of a healthcare provider based in Southeast Asia says: "Since AI is still evolving, and regulators are trying to control the scope of usage, some of the legal challenges remain unknown. Especially when it comes to selected aspects, the laws are changing and it creates uncertainty for us."

Even as regulatory guidance improves, the diversity of stakeholders and jurisdictions involved means AI governance will remain complex. For now, companies must build processes that are flexible, transparent and grounded in clear documentation, even when the rules remain in flux.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2026 White & Case LLP
