AI Risks Legal Sector Must Consider In Dispute Resolution

Two recent cases from the English High Court, Ayinde v. London Borough of Haringey and Al-Haroun v. Qatar National Bank and QNB Capital, serve as cautionary tales regarding the use of artificial intelligence, highlighting some of the dangers that must be borne in mind when employing AI in litigation.1

While these are litigation cases, the lessons of both apply equally to international arbitration, where optimism about the possible uses of AI must be tempered by caution about the risks of embedding the technology in such a subtle and sensitive area of law.

With the proliferation of generative AI, or GenAI, in the early 2020s, AI and its use in the context of dispute resolution have received increasing attention. For legal practitioners navigating increasingly complex and data-heavy proceedings, AI, and GenAI in particular, offers a transformative solution.

Traditionally, AI was confined to the document production or discovery phases of a dispute, but its evolution presents significant opportunities for accuracy and efficiency gains across the full life cycle of a dispute. The potential result is a more efficient and cost-effective process, allowing practitioners to focus on strategic reasoning, advocacy and client outcomes while reducing administrative burdens.

What Is AI?

The term "AI" can encompass many things. One example, predictive coding, also known as technology assisted review, will be familiar to many legal practitioners, as it is used to facilitate large-scale document review in both litigation and arbitration.

Predictive coding tools use machine learning, trained on a subset of so-called seed documents that have been tagged by human reviewers, to identify relevant documents across the wider data set, and are typically used in the context of document review ahead of disclosure. Such technology was endorsed for use in English High Court litigation in Pyrrho Investments Ltd. v. MWB Property Ltd. in 2016.2
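To make the mechanics concrete, here is a minimal Python sketch of the core idea behind predictive coding: a simple classifier is trained on a handful of hypothetical seed documents tagged by a human reviewer, then scores unreviewed documents so that the likeliest-relevant ones can be prioritized. The snippet assumes the open-source scikit-learn library and invented document text; real e-discovery platforms layer iterative review rounds, sampling and validation protocols on top of this basic loop.

    # Predictive coding in miniature: learn relevance from human-tagged
    # seed documents, then score the unreviewed set.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical seed set tagged by human reviewers (1 = relevant).
    seed_docs = [
        "termination notice under clause 12 of the supply agreement",
        "minutes of the board meeting approving the transaction",
        "office party catering order and guest list",
        "weekly canteen menu circulated to all staff",
    ]
    seed_labels = [1, 1, 0, 0]

    # Convert text to term-weight vectors and fit a classifier.
    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

    # Score unreviewed documents; high scores go to human review first.
    review_set = ["draft notice terminating the agreement", "updated lunch menu"]
    scores = model.predict_proba(vectorizer.transform(review_set))[:, 1]
    for doc, score in zip(review_set, scores):
        print(f"{score:.2f}  {doc}")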

More recently, we have seen the increasing popularity of GenAI. In essence, the large language models that power GenAI systems generate coherent and contextually relevant responses by predicting patterns in language. These models analyze vast amounts of text through an artificial neural network and can produce responses based on the context they are given.

When the user enters a prompt, the model calculates the most probable next words based on the patterns it has learned from its training data, resulting in generally coherent responses.
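As a toy illustration of that prediction step, the short Python sketch below builds a bigram model: it counts which word follows which in a tiny invented corpus and returns the most probable continuation of a given word. Production LLMs operate on subword tokens with billions of learned parameters rather than simple counts, but the underlying principle of predicting the next token from learned patterns is the same.

    # Toy next-word prediction: count word-to-word transitions in a
    # small corpus and pick the most probable continuation.
    from collections import Counter, defaultdict

    corpus = ("the court held that the claim failed and "
              "the court held that the appeal failed").split()

    transitions = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        transitions[current][nxt] += 1

    def most_probable_next(word):
        # Return the likeliest next word and its estimated probability.
        counts = transitions[word]
        nxt, count = counts.most_common(1)[0]
        return nxt, count / sum(counts.values())

    print(most_probable_next("court"))  # ('held', 1.0)
    print(most_probable_next("claim"))  # ('failed', 1.0)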

The rise of GenAI, and especially agentic GenAI systems, has opened the door for more extensive integration of AI across the different stages of a dispute.

Use of AI by Counsel

In arbitration, there is a growing consensus that practitioners are likely to use AI more in the future. Surveys of the international arbitration community by White & Case and Queen Mary University of London found that, while only 12% of respondents in 2018 said they "frequently" or "always" used AI, by 2021 this figure had increased to 15%.

In the 2025 International Arbitration Survey,3 respondents were asked what tasks they had been using AI to conduct and how they saw the use of AI tools and technology developing over the next five years. Fewer than 20% of the 2,402 respondents admitted to using AI "often" or "almost always" across all tasks in the past five years, but a majority expected to be using AI "often" or "almost always" to conduct factual and legal research, perform data analytics and review documents in the next five years.4

At the same time, there was trepidation among arbitration practitioners, with respondents expressing concern regarding the risk of undetected AI errors and bias (51%) and the risk of ethical infractions (24%).

In the litigation context, tools that leverage the capabilities of AI have already demonstrated value. AI is increasingly used to manage large volumes of disclosure, conduct predictive case analytics and assist with drafting.5

These tools offer significant efficiencies by reducing review time, identifying relevant authorities more quickly and highlighting patterns in complex datasets that might otherwise escape human analysis.

However, the acute risks of uncritical reliance on these tools can be seen in both the Ayinde and Al-Haroun decisions, as well as in cases from other jurisdictions.6

The first case, Ayinde, arose from the aftermath of a judicial review proceeding brought by the claimant regarding housing claims. Following that proceeding, the defendant applied for a wasted costs order against the claimant's legal representatives on the basis that they had cited five fake cases. The court awarded wasted costs and referred the case to a further proceeding under its Hamid jurisdiction.7

In the Hamid proceeding, Sarah Forey refused to accept that her conduct was improper and denied that GenAI had been used. The court concluded that this left "two possible scenarios": Either Forey had deliberately included fictitious cases in her submissions, or she had in fact used GenAI and subsequently denied doing so. The judge concluded that either scenario would amount to a clear contempt of court.

The court called her conduct "wholly improper," rejecting Forey's attempts to downplay the fabricated authorities as minor citation errors and focusing on the lawyer's nondelegable duty not to mislead the court. The court referred Forey to her professional regulator, emphasizing that although contempt proceedings were not initiated in this instance, lawyers "who do not comply with their professional obligations in this respect risk severe sanction."

The second case, Al-Haroun, concerned claims for damages for an alleged breach of a financing agreement. Here, too, reliance was placed on fictitious or misquoted authorities: 18 of the 45 citations in a schedule put before the court were found to be nonexistent. Strikingly, one of the cases put before Judge Julia Dias was a fake decision attributed to Judge Dias herself.

The claimant's solicitor, Abid Hussain, admitted to relying on research conducted by his client using publicly available GenAI tools to produce citations. The court described Hussain's actions as a "lamentable failure to comply with the basic requirement to check the accuracy of material that is put before the court." It referred Hussain to the Solicitors Regulation Authority but found that, in this case, the threshold for contempt of court was not met.

Against that background, it is important to bear in mind that while AI may assist in preparing cases, responsibility for accuracy and integrity rests with lawyers: Every citation must be verified against authoritative sources before filing. A lawyer who misleads the court, even inadvertently, can still be "incompetent and grossly negligent."8

Use of AI by Decision-Makers

The impact of this technology is expected to extend beyond counsel, with AI use by both arbitrators and judges likely to increase.

However, there is less consensus as to which tasks arbitrators should be using AI for, in particular as concerns decision-making. For example, in the aforementioned arbitration survey, only 31% of arbitration practitioners approved of AI use in assessing the merits or accuracy of submissions and evidence, while the vast majority of arbitrators and counsel disapproved of the use of AI by arbitrators to draft the reasoning in awards and decisions.9

A reasoned decision on the merits requires an explanation of how the arguments of both sides have been assessed and the evidence weighed. As long as AI remains a so-called black box, where the process behind the output is opaque, trust in such tools to take on the decision-making aspects of an arbitrator's role will remain low.

On the litigation side, the AI Guidance for Judicial Office Holders, published in April 2025,10 underscores the need for the judiciary to have a basic understanding of the capabilities and limitations of AI tools.

In recognition of the practical applications of GenAI, Microsoft Corp.'s Copilot Chat is available to the judiciary. The guidance even lists some potentially useful tasks for AI, such as summarizing large amounts of text or preparing presentations.

At the same time, the risks of AI being used for decision-making are also recognized. The guidance emphasizes that AI must not intrude on core judicial functions, such as legal analysis or reasoning.

Judges are reminded to verify the accuracy of any AI-generated content, avoid entering confidential or private information into public AI tools, remain accountable for all material issued in their names, and be cautious of the risks of bias, inaccuracies and potential misuse by litigants.

Takeaways

The core lesson is clear: AI can support arbitration and litigation but must be used with care. The general view among practitioners, reflected in the issuance of guidance by institutions and regulators,11 is that AI presents a transformative opportunity and will become increasingly integrated in the daily tasks of counsel, arbitrators and judges alike.

Cases such as Ayinde and Al-Haroun are, however, a sobering reminder of the potential for misuse and negligence arising from the use of such tools. With that in mind, practitioners and decision-makers should consider how their core ethical duties, including honesty, integrity and the obligation not to mislead the court or tribunal, apply to AI use.

In this connection, individuals would do well to note the advice being issued by regulators and institutions, which highlights the confidentiality, privacy and hallucination risks associated with AI-assisted work, and the measures required to address them.

1 Ayinde v. London Borough of Haringey and Al-Haroun v. Qatar National Bank [2025] EWHC 1383.
2 Pyrrho Investments Ltd v. MWB Property Ltd. [2016] EWHC 256 (Ch).
3 2025 International Arbitration Survey by White & Case and Queen Mary University of London.
4 Being (i) conducting factual and legal research, (ii) data analytics, (iii) document review, (iv) drafting correspondence, (v) drafting submissions, and (vi) evaluating legal arguments.
5 See, for example, https://lidw.co.uk/actionable-insights-the-new-ai-in-litigation/.
6 As cited in the Ayinde cases, examples include Southern District of New York, Mata v. Avianca Inc., Case No. 22-cv-1461 (PKC), 2023 WL 4114965 (SDNY 22 June 2023); the District Court for the Central District of California, Lacey v. State Farm General Insurance Co. CV 24-5205 FMO (MAAx), 6 May 2025; the Australian Federal Circuit and Family Court, Valu v. Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95 and the Supreme Court of British Columbia, Zhang v. Chen [2024] BCSC 285.
7 The court's Hamid jurisdiction relates to the court's inherent power to regulate its own procedures and to enforce duties that lawyers owe to the court.
8 Ayinde v. London Borough of Haringey and Al-Haroun v. Qatar National Bank [2025] EWHC 1383 at [12], citing the Bar Council, "Considerations when using ChatGPT and generative artificial intelligence software based on large language models," January 2024, para. 17.
9 2025 International Arbitration Survey by White & Case and Queen Mary University of London.
10 Guidance for Judicial Office Holders on Artificial Intelligence: https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf.
11 See, for example: Chartered Institute of Arbitrators Guideline on the Use of AI in Arbitration: https://www.ciarb.org/media/m5dl3pha/ciarb-guideline-on-the-use-of-ai-in-arbitration-2025-_final_march-2025.pdf; the Bar Council's "Considerations when using ChatGPT and Generative AI Software based on large language models": https://www.barcouncilethics.co.uk/wp-content/uploads/2024/01/Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024.pdf; and the Law Society's guidance on generative AI: https://www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials.

This article was first published by Law360 on September 25, 2025. For further information, please visit the Law360 website.

This publication is provided for your convenience and does not constitute legal advice. This publication is protected by copyright.
