Five things you should know about… GenAI and Litigation

Alert | 9 min read

The use of Artificial Intelligence (AI), including Generative AI (GenAI), is fast becoming an established part of legal practice. In September 2025, it was reported that 61% of lawyers in the United Kingdom use a form of AI in their day-to-day work.1 Six months on, that figure will almost certainly be higher.

The rise in AI uptake presents opportunities but also challenges for the legal profession, and the justice system is adapting to the rapid integration of AI use in litigation. Guidance to assist judicial office holders on responsible AI use has been published2 and, as several decisions show, courts are grappling with some of the issues unverified GenAI use creates (such as "hallucinations"3). Recently, the Civil Justice Council (CJC) published an interim report and launched a consultation on the use of AI by legal representatives in preparing court documents.

This is a pivotal moment for the profession. Here are five key things litigators and their clients need to know.

1. AI is a tool – not a lawyer

GenAI systems are fundamentally predictive in nature; rather than reasoning in the way humans do, they learn from patterns in the large datasets on which they have been trained and apply those patterns to generate plausible outputs (often convincingly so). While much work is being done to reduce the occurrence of hallucinations (through, for example, fine-tuning and Retrieval-Augmented Generation), today's GenAI systems still get things wrong. Hallucinations remain a real and present concern.
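
By way of illustration only, the sketch below shows the basic Retrieval-Augmented Generation pattern in Python: the tool is asked to answer only from passages retrieved out of a vetted, closed source set, which narrows (but does not remove) the scope for hallucination, and a human still verifies the result. The corpus, retriever and answer step are hypothetical placeholders rather than any particular vendor's product or API.

    # A minimal, illustrative sketch of Retrieval-Augmented Generation (RAG).
    # The corpus, retriever and answer step are hypothetical placeholders,
    # not any real vendor API: the point is that generation is grounded in
    # passages retrieved from a vetted, closed source set, and that a human
    # still verifies the output before it is relied upon.

    CORPUS = [
        "Passage A: example text from an approved internal knowledge base.",
        "Passage B: another vetted passage that the tool is allowed to cite.",
        "Passage C: a further approved passage on an unrelated topic.",
    ]

    def retrieve(question: str, corpus: list[str], top_k: int = 2) -> list[str]:
        """Rank passages by naive keyword overlap with the question."""
        q_words = set(question.lower().split())
        ranked = sorted(corpus,
                        key=lambda p: len(q_words & set(p.lower().split())),
                        reverse=True)
        return ranked[:top_k]

    def answer_with_context(question: str, passages: list[str]) -> str:
        """Stand-in for the generation step: in a real system the retrieved
        passages would be supplied to a language model as context,
        constraining what it can assert."""
        context = "\n".join(passages)
        return f"Question: {question}\nGrounded in:\n{context}"

    if __name__ == "__main__":
        question = "example question about the internal knowledge base"
        draft = answer_with_context(question, retrieve(question, CORPUS))
        print(draft)  # A lawyer must still verify the draft before any use.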

GenAI performs best where its output can be verified efficiently by a human, and worst where an exhaustive answer is needed that is difficult to verify. Certain tasks – such as legal research – are better handled by research-specific AI tools than by general-purpose GenAI platforms, where the potential for hallucinations and the verification burden mean there is little efficiency to be gained. For other tasks, such as disclosure and search functions, AI resources can be a powerful tool in a litigator's arsenal and offer significant time savings, although the possibility of error and the importance of verification remain.

2. Privilege and confidentiality

Legal advice privilege is a cornerstone of English law. It protects confidential communications between lawyer and client, made for the dominant purpose of giving or obtaining legal advice, from disclosure to others.4 It has hit the legal headlines recently, with court decisions in both England and the United States on how privilege interacts with GenAI, and in particular the considerations arising from the use of public or open-source AI tools.

The core issue is confidentiality: if confidentiality is lost, privilege falls away with it. This creates an acute problem with public GenAI tools. The "human-like" responses of chatbots can feel like a conversation with a trusted adviser, but they are not. They cannot provide legally privileged advice and, where confidential information is shared with a public third-party GenAI tool, that confidentiality is almost certainly compromised.

The courts have been very clear on the risks of using public GenAI tools. In the recent decision in UK v Secretary of State for the Home Department,5 the Upper Tribunal held that uploading confidential documents into an open-source AI tool "is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the SRA and should, in any event, be referred to the Information Commissioner's Office."6 The Courts and Tribunals Judiciary Guidance for Judicial Office Holders on using GenAI is equally clear, and includes the following warning: "Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world […] You should treat all public AI tools as being capable of making public anything entered into them."

Prompts raise similar concerns. Prompts and the resulting outputs from GenAI tools are considered documents for the purposes of English litigation proceedings and may be disclosable unless privilege can be established. A prompt that reveals the nature or direction of the legal advice sought might itself attract legal advice privilege, but only if it was created in the context of a confidential lawyer/client relationship and for the dominant purpose of obtaining legal advice. Prompts drafted on public AI tools are unlikely to be protected. Practitioners should also not be lulled into a false sense of security by the "prompt improver" features built into some systems. Prompts should be treated with the same discipline as any other document created in the course of legal work, with a careful eye kept on what they might reveal if disclosed.

3. AI hallucinations and court documents

An increasingly common example of GenAI hallucination in litigation is the inclusion of authorities in court documents that do not exist, or do not say what it is submitted that they say. The courts are clear that it is incumbent upon lawyers to ensure the accuracy of their work and uphold the regulatory principles that apply to them.7 In UK v Secretary of State for the Home Department, the Upper Tribunal held that "any practitioner who uses non-specialist AI to undertake research or drafting is obliged to undertake rigorous checks to ensure that any information gleaned from those sources is true and accurate"; and confirmed that solicitors in supervisory or delegatory roles remain responsible for the supervision of work and its accuracy.

The consequences of getting this wrong are severe. The court's powers in the face of false or misleading information include public admonition of the legal professional, costs orders, striking out a case, referral to the regulator, contempt proceedings, and referral to the police.8

4. The CJC Interim Report and Consultation

In January 2025, the CJC formed a working group chaired by Lord Justice Birss, Deputy Head of Civil Justice, to examine the use of AI by legal representatives for the purposes of preparing court documents.9 The CJC has now published its interim report and consultation, considering "whether rules are needed to govern the use of AI by legal representatives for the preparation of court documents".10 

The CJC's core proposal is that declarations about AI use should be required where AI (particularly GenAI) has been used to generate evidence on which the court is asked to rely. In relation to trial witness statements, the CJC has suggested including a declaration in the witness statement confirming that AI has not been used to generate or rephrase substantive evidence. For expert evidence, the CJC proposes that the current standard form statement of truth be expanded to address the use of AI, requiring experts to explain what use of AI (and which tools), if any, has been made in preparing the report, save where it has been used for translation or administrative purposes.

The consultation closes on 14 April 2026, with a final report of recommendations to follow.

5. Practical steps to take now

The courts recognise the rapid adoption of AI in litigation, and they are adapting to it. So what guardrails can you put in place to manage the use of AI, reaping the benefits while minimising the challenges?

  • Train your team. Private practitioners, in-house lawyers and others working with them should be trained on the nature of AI systems and on the specific use cases for which the firm or company has approved AI use. Put appropriate policies and guardrails in place for each practice area. Without this, proper scrutiny of AI systems and their outputs cannot be expected, and the potential for embarrassment and liability is substantial.
  • Use the right tools. Appropriately secure, closed, AI platforms should be used for client-sensitive work. Public tools should not be used for anything confidential and/or privileged.
  • Verify everything and be transparent. If GenAI has been used in the creation of work product, this should be made clear, and the output of any GenAI tool must always be checked. Ensure that quotations and principles drawn from cases identified by GenAI are accurate. Those in supervisory roles must ensure that those they supervise understand both the risks and benefits of GenAI use, and supervisors should be prepared to take responsibility for the accuracy of all work.
  • Protect privilege. Be aware of the requirements for privilege to apply in any given jurisdiction and understand when it can be lost. The baseline assumption should be that putting information of any kind into a public AI tool is publishing it to the world. Instead, use secure, proprietary AI tools, or tools covered by specific confidentiality agreements.
  • Talk to your clients. AI governance in litigation is not just a law firm issue. Clients whose documents, evidence or expert materials are involved need to understand how AI is being used by the law firm, the safeguards in place, and what is and is not permissible AI use.

1 Two-Thirds of UK Lawyers Use AI | 2025 | LexisNexis Newsroom.
2 Artificial Intelligence (AI) – Judicial Guidance (October 2025) - Courts and Tribunals Judiciary.
3 "Hallucinations" being AI-generated content that appears to be legally correct but is not (e.g., in the litigation context, fabricated case law or procedural rules, incorrect quotations from judgments, etc.). See, for example, R (on the application of Ayinde) v London Borough of Haringey [2025] EWHC 1040 (Admin), and the more recent case of UK v Secretary of State for the Home Department [2026] UKUT 81.
4 This alert does not consider litigation privilege in detail. For further detail on AI and privilege, see our previous publication: AI in the Boardroom: privilege and recording decisions | White & Case LLP.
5 [2026] UKUT 81.
6 Supra, at [60].
7 See section VI of the Courts and Tribunal Judiciary Guidance. See also, the SRA Code of Conduct for Solicitors at 1.4 and 2.4, by which solicitors are required to: (i) not mislead or attempt to mislead the court or their clients whether by act, omission or complicity in the acts or omissions of others; and (ii) only make assertions or put forward statements, representations or submissions to the court or others which are properly arguable, respectively.
8 See 2.17 of the CJC Interim Report and Consultation: "Use of AI for Preparing Court Documents".
9 Use of AI in preparing court documents - Courts and Tribunals Judiciary.
10 Use of AI for Preparing Court Documents: Interim Report and Consultation.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2026 White & Case LLP
