Industry perspectives on the state of compliance today and effective strategies for managing compliance risk within the changing regulatory landscape
In a world that moves at breakneck speed, corporate legal and compliance teams have never faced greater pressure to stay ahead of the game. The result is a function that is not just reactive to risk, but increasingly proactive in shaping corporate behavior and decision-making.
This year’s Global Compliance Risk Benchmarking Survey offers a timely snapshot—based on insights from 265 senior compliance, legal and risk professionals worldwide—of how today’s legal and compliance leaders are adapting to new technologies, regulatory expectations and cultural shifts in business conduct.
The themes explored in this year’s survey reflect the changing nature of legal and compliance risk management. Artificial intelligence (AI) is becoming an operational reality within legal and compliance teams. Our findings show that while a growing number of organizations are deploying AI to drive efficiency and clarity in investigations and reporting, concerns about accuracy, governance and data privacy remain significant. As adoption increases, so does the need for guardrails to ensure that the use of AI enhances—rather than undermines—operational integrity.
We explore not only whether organizations are using AI, but also how long they have been doing so; the primary motivations driving adoption; the specific uses being prioritized; and the perceived advantages gained by users. Crucially, we also investigate the key concerns surrounding AI utilization; the prevalence of governance policies; the integration of AI risk into broader enterprise risk management (ERM) frameworks; and controls being implemented to ensure the trustworthiness and reliability of these tools.
Additionally, we examine the use of off-network messaging applications—tools that are convenient for employees, but often challenging for legal and compliance teams to monitor and access. The findings suggest that while many companies are implementing written policies, only a minority actively collect or audit off-network communications. This raises questions about whether, and how well, these policies are being enforced and whether they are sufficiently comprehensive in scope, and underscores the importance of clear risk leadership and the right “tone from the top”. Regulators are watching this space closely, and companies must consider whether their current approaches are sufficient in both spirit and substance.
The conversation around compliance incentivization shows promising signs of maturity. Many organizations are now integrating compliance metrics into compensation and performance frameworks. This finding suggests a shift from relying solely on punitive measures toward building a culture where ethical behavior is actively recognized and rewarded. Yet, the effectiveness of these programs depends not just on their existence, but on how consistently they are implemented and whether they are aligned with broader business goals. The survey sheds light on the growing use of compliance-linked key performance indicators (KPIs) and how these are shaping both corporate culture and accountability.
In the final section, the report explores how companies are approaching voluntary self-disclosure to the United States Department of Justice (DOJ). While many companies now have formal processes to assess potential misconduct and to consider self-reporting, concerns about cost, reputational risk and the perceived benefits of disclosure continue to hold some organizations back. These concerns should be considered in the context of the global landscape. It remains to be seen, for example, to what extent updated UK guidance on corporate self-reporting will factor into the equation for multinational organizations.
Together, these findings offer a nuanced view of how legal and compliance teams are navigating the demands of a digital, distributed and demanding business environment. From emerging technologies to traditional risk domains, the survey provides practical benchmarks and insights for organizations aiming to build resilient, forward-looking compliance programs.
We hope you find this year’s report both informative and thought-provoking.
Key takeaways
Given the far-reaching nature of the survey and the findings within, as well as the changing nature of the compliance function, below are five takeaways that every legal and compliance leader should keep front of mind.
1. AI adoption is accelerating—and governance must keep pace
As more compliance teams deploy AI to streamline investigations and analyze risk, oversight frameworks need to evolve in parallel. Clear internal policies, strong ERM integration and proactive controls are essential to avoid over-reliance and ensure ethical, defensible use of these tools.
2. Managing off-network messaging is now a baseline expectation
Having a policy on off-network messaging is no longer a differentiator—it’s a minimum requirement. Policy enforcement mechanisms, such as backup requirements and audit trails, are the next frontier, and organizations lagging here risk falling short of regulatory expectations.
3. Compliance incentives are working—but must go deeper
Tying compensation and recognition to compliance outcomes is gaining traction and positively shaping behavior. To be effective, however, these programs must apply across employee levels and extend to third parties. Selective or symbolic application risks undermining their impact.
4. Voluntary disclosure is still a difficult choice; decision frameworks help
While concerns about cost, reputational harm and prolonged regulatory scrutiny persist, many organizations are still investigating and remediating misconduct—even when they opt not to self-disclose to the DOJ. The trade-offs are real: Voluntary self-disclosure may lead to reduced penalties and credit for cooperation, but it can also trigger intense external investigation, significant legal fees and public exposure. Building robust internal frameworks to assess these scenarios—and engaging regulators early where appropriate—can help organizations make more confident, consistent decisions.
5. Compliance is becoming a strategic function
As risks grow more complex and digitalized, the compliance function is evolving into a strategic advisor to the business. This shift not only requires more resources, but also a change of mindset—embedding compliance thinking into executive-level planning.
Artificial intelligence in the compliance function
Key takeaways
1. AI adoption in compliance and investigations is gaining traction, especially among larger and publicly listed companies
2. Current usage patterns suggest that most respondents using AI are still in the early stages of their journey
3. Efficiency and cost savings are the primary motivators for AI implementation
4. Current use cases center on document summarization and review and assisting with risk assessments and regulatory updates, with more advanced uses still emerging
5. Respondents from larger organizations report higher satisfaction with AI tools, likely because they have used AI for longer and have achieved better integration
6. Formal policies and risk controls around AI use are more common in high-revenue and public companies
Watch: Partner Perspectives
Legal strategy is now AI product strategy
In this episode, AI partners Erin Hanson, Hope Anderson and Tim Hickman discuss how companies can align legal strategy with product development from the outset. From data governance to global regulation, their message is clear: "AI is moving fast. So is the law. The winners will be those who align early."
Corporate compliance is undergoing a seismic shift due to the transformative effect of digitalization and, in particular, AI. Once a futuristic concept, AI is rapidly becoming a mainstream, even business-critical, technology for legal and compliance functions globally. As organizations grapple with an increasingly complex regulatory environment, exponential data growth and relentless pressure to operate more efficiently and effectively, AI presents both unprecedented opportunities and novel challenges.
Our findings reveal a period of transition—one where early adopters are realizing tangible benefits, while also running up against growing pains such as implementation challenges, gaps in policy development and the inherent risks of deploying this transformative technology.
AI adoption trends
At the most fundamental level, AI is no longer a niche tool, but a technology gaining serious traction, albeit with adoption levels varying considerably across different organization types and sizes. Overall, 36 percent of respondents report using AI in both their compliance and investigations processes, with a further 26 percent using it for compliance tasks only.
This adoption is notably higher among certain segments. Respondents from publicly listed companies are almost twice as likely (44 percent) to use AI for both compliance and investigations compared with their private sector counterparts (23 percent). This disparity likely reflects the larger data volumes and potentially higher investment capacity often associated with public entities, as well as the correspondingly greater expectations from regulators regarding the use and deployment of data analytics in underlying compliance programs. Similarly, corporates show significantly higher adoption of AI (43 percent) compared with private equity firms (10 percent), suggesting differences in operational scale, risk appetite and/or the immediate perceived need for AI-driven compliance between these different types of businesses.
Organizational size and revenue generation show a strong positive correlation with AI adoption. Nearly six in ten (59 percent) of the highest revenue-generating respondents already leverage AI for both compliance and investigations, a stark contrast to the 14 percent adoption rate among the lowest revenue-generating respondents. This finding highlights a resource gap, where larger organizations possess the financial means, technical expertise and necessary data infrastructure to invest in and deploy AI more readily.
The tenure of AI usage reveals that while adoption is growing, it is still a relatively recent phenomenon for many organizations. Among respondents currently using AI, the largest cohort (36 percent) has been using it for one to two years, closely followed by those using it for a year or less (34 percent). A wave of adoption has occurred within the past two years, driven in part by pandemic-era digitalization trends, but even more so by the rapid mainstreaming of generative AI models and other scalable tools that have made the technology newly accessible and applicable to legal and compliance teams.
Again, respondents from larger organizations demonstrate longer-term engagement with AI. Almost half (46 percent) of the highest-revenue respondents have been using AI for two to five years, compared with just 11 percent of the lowest-revenue respondents. Unsurprisingly, organizations that have used AI longer perceive the value of the technology as higher and have developed more sophisticated use cases.
The rationale behind implementing AI for compliance and investigations is overwhelmingly pragmatic, focusing on efficiency and resource optimization. For respondents using these tools, the primary drivers are time savings (cited by 73 percent) and cost savings (71 percent). This finding underscores the increasing pressure on compliance functions to "do more with less"—managing escalating risks and data volumes without commensurate increases in headcount or budget. As one member of the ethics and compliance function of a US company said: "We use AI for compliance and investigations to lower the amount of manual work. Manual work has become time consuming due to the changing regulations and the complexity of the process. So, the use of AI became inevitable at a certain point."
AI is viewed as a critical tool to automate repetitive tasks, accelerate analysis and free up compliance professionals for higher-value strategic work.
When examining the specific uses of AI among users in compliance and investigations, a clear focus emerges on the use of AI for tasks involving large-scale text analysis. The top use cases identified are summarizing documents (88 percent) and reviewing documents during investigations (85 percent). This aligns with the strengths of current natural language processing (NLP) and large language model (LLM) technologies, which excel at processing and extracting information from vast amounts of unstructured text data—a challenge shared across compliance monitoring, due diligence and internal investigations.
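To make the document-summarization use case concrete, the sketch below shows one minimal way a compliance team might script summarization against a commercial LLM API. It is illustrative only: it assumes the OpenAI Python SDK and API access, and the model name, prompt and helper function are our own assumptions rather than tools identified by the survey.

```python
# Minimal, illustrative sketch of LLM-assisted document summarization for
# compliance review. Assumes the OpenAI Python SDK (pip install openai) and
# an OPENAI_API_KEY environment variable; the model name and prompt are
# illustrative assumptions, not tools referenced in the survey.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_document(text: str, model: str = "gpt-4o-mini") -> str:
    """Return a short, review-oriented summary of a single document."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a compliance review. Summarize the "
                    "document in five bullet points, flagging any dates, "
                    "counterparties or payment terms mentioned."
                ),
            },
            {"role": "user", "content": text},
        ],
        temperature=0,  # favor deterministic, conservative output
    )
    return response.choices[0].message.content


# Example usage (hypothetical file name):
# print(summarize_document(open("contract_0001.txt").read()))
```

Even configured conservatively, output of this kind is a starting point for human review, not a substitute for it—a point that matters given the risks discussed below.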
In particular, the advent of generative AI marks a notable inflection point. Unlike earlier rule-based systems or machine learning algorithms designed for discrete tasks, generative models can summarize, compare, rephrase or even prepare first drafts of compliance documentation in a fraction of the time. This versatility, while powerful, also brings a new class of risks, including potentially opaque decision-making, unexpected outputs and uncertainty around the reliability of AI-generated content. Organizations are still grappling with where to draw the line between helpful automation and risky over-reliance and potential liability exposure.
While current uses of AI deliver on efficiency and cost savings, they represent a relatively narrow band of the technology's capabilities. More sophisticated applications, such as advanced anomaly detection in transactional data or intelligent training personalization, feature less prominently among the top responses, suggesting that many organizations are still in the early stages of leveraging AI's full potential.
Some organizations are already seeing benefits beyond basic review, however, as noted by a member of the legal function of a Mexico-based company: "We noticed how contextual information is captured and processed by utilizing AI, so we are using it for both compliance and investigations processes. There is a better understanding of everyday and uncommon risks in our activities."
User experience: High engagement and perceived value
Encouragingly, where AI is implemented, user engagement and satisfaction appear to be high. Among respondents in organizations using AI, almost all (96 percent) report personally using AI tools within their role. This level of use indicates that AI is not just running in the background, but is being integrated into the daily workflows of legal and compliance professionals.
Furthermore, the perceived utility is overwhelmingly positive. None of the respondents who personally use AI tools find them unhelpful. Instead, 48 percent rate them as "very helpful," while 43 percent find them "somewhat helpful."
This strong endorsement suggests that once deployed, these tools are meeting user needs and delivering tangible benefits in their day-to-day tasks.
This perceived utility correlates strongly with organizational size and resources. Nearly three-quarters (73 percent) of users at the highest-revenue-generating respondents find AI tools "very helpful," compared with only 37 percent at the lowest-revenue-generating respondents. This disparity is likely attributable to the maturity of AI implementation in larger firms, better integration with existing systems, more comprehensive training and/or access to more sophisticated, tailored tools, reinforcing the longer-tenure findings.
Despite positive user experiences, significant concerns remain regarding the deployment of AI. The key concerns center on data security and reliability. Data protection emerges as the top concern (64 percent), reflecting respondents' anxieties about handling sensitive personal or corporate data within AI systems, ensuring compliance with privacy regulations such as the European Union's General Data Protection Regulation (GDPR) and safeguarding against costly breaches. Inaccuracy (57 percent) is the second major concern, highlighting the risks associated with potential biases in algorithms, "hallucinations" in generative AI outputs, and the consequences of making legal and compliance decisions based on potentially flawed AI analysis.
Concerns about overreliance on AI are more pronounced in respondents that are publicly listed companies (55 percent) than in private organizations (35 percent). This finding may indicate a greater awareness in public companies of stricter governance expectations and a keener sense of the reputational and regulatory risks associated with inadequately supervised AI systems.
One often underestimated hurdle is cultural. Some legal and compliance teams remain skeptical of AI, fearing that automation could either dilute their influence or introduce errors for which they will be held responsible.
Others face institutional silos, where the data required for AI analysis is sequestered in legacy systems or resides in departments that do not coordinate effectively with legal and compliance teams. Without cross-functional alignment, even the most advanced AI models will struggle to reach their potential. Given this reality, it is perhaps not surprising that one of the questions that US DOJ prosecutors are encouraged to ask in the Evaluation of Corporate Compliance Programs (ECCP) guidance when assessing a company's compliance program is whether compliance teams have sufficient access to relevant data sources for timely testing and monitoring of a company's policies, controls and transactions.
Policies and frameworks: Catching up to technology
As AI adoption grows, organizations are stepping up by developing governance frameworks, although progress varies. Almost two-thirds (63 percent) of respondents report having a policy governing employee use of AI. A significant gap remains, however, with 26 percent stating they do not currently have a policy but plan to implement one. Policy implementation shows disparities similar to adoption rates: 79 percent of the highest-revenue respondents have an AI use policy compared with only 34 percent of the lowest-revenue respondents. Likewise, publicly listed respondents (75 percent) and corporates (68 percent) are ahead of private companies (44 percent) and private equity firms (30 percent) in establishing these guidelines.
Beyond usage policies, integrating AI considerations into broader risk frameworks is crucial. Currently, 60 percent of respondents consider risks associated with the use of AI and other new technologies as part of their ERM process. AI is also proving to be valuable in navigating the complexities of the regulatory environment. "When there are regulatory updates, we need to make sure that we remain adaptive to these changes," explains a member of the ethics and compliance function of a US company. "AI has transformed the way we adapt to regulatory changes. Existing compliance procedures are altered without many issues. There is more confidence in our compliance management ability overall."
Integrating AI into ERM is again more prevalent among larger and public respondents compared with smaller and private entities. For example, 71 percent of respondents that are public companies incorporate AI into ERM versus 44 percent of private companies, and 79 percent of the highest-revenue respondents do so compared with 30 percent of the lowest revenue ones.
Encouragingly, among the 60 percent of respondents that consider these risks in ERM, a strong majority (79 percent) state they have controls in place to monitor and ensure the trustworthiness and reliability of AI and its use in accordance with applicable law and company policy. This finding suggests that organizations formally addressing AI risks are also actively implementing mitigation measures.
Looking ahead, several forces may accelerate AI integration in compliance. Regulatory bodies are starting to experiment with AI for enforcement and oversight, raising the stakes for regulated entities. At the same time, the emergence of AI-specific audit frameworks and ethical guidelines along with innovation programs run by governments and regulators may help hesitant organizations gain confidence. In the UK, the Financial Conduct Authority (FCA) publishes regular updates on the work it is doing to support the government's "pro-innovation strategy" on AI (this work has included co-authoring a discussion paper on how AI may affect regulatory objectives for the prudential and conduct supervision of financial institutions).
As tools evolve, we may also see a shift from task automation toward decision augmentation—where AI is not just doing the work but helping to shape how compliance professionals think about risk.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.