Paste Now, Regret Later? AI And the Privilege Problem
Can you seek legal advice from AI?
Will your conversations with AI be privileged, so that their disclosure can be avoided?
The following two recent cases offer practical lessons and help identify the potential risks to privilege where AI is used to obtain legal advice.
What is legal professional privilege?
Legal professional privilege in the UK is a fundamental principle that protects confidential communications between clients and their lawyers from disclosure. This means such documents can be withheld from third parties, subject to limited exceptions.
For a communication to be protected by legal privilege under English law, it must be and remain confidential. In addition, for the protection of legal privilege to apply, the communication needs to be between a client and a lawyer.
United States v Heppner 25 Cr.503 (JSR)
In a first-of-its-kind decision, on 10th February 2026, the U.S. District Court for the Southern District of New York ruled that documents created by a client using a publicly available generative AI tool were not covered by attorney-client privilege or work product privilege and were therefore not protected from disclosure.
The defendant, Heppner, had been charged with securities and wire fraud, and federal agents executed search warrants at his home and seized materials and documents. These included 31 documents memorialising communications Heppner had with the publicly available generative AI platform Claude after the grand jury subpoena. These documents outlined defence strategy, including what the accused might argue in response to the factual and legal case it was anticipated the government might advance.
Heppner claimed privilege over these documents, arguing that they had been created for the purpose of speaking with counsel and were subsequently shared with his lawyers.
The judge disagreed and held:
- The communications were not between a client and his lawyer.
- Heppner did not communicate with Claude for the purpose of obtaining legal advice. He did so on his own, and not at the request of his lawyers. An intention to share the documents with his lawyers did not make the communications privileged.
- The communications were not confidential, as Claude’s written policy provides that Anthropic uses customers’ inputs and Claude’s outputs and discloses them to third parties, including government regulators.
The judge concluded by saying at (p. 12):
“Generative artificial intelligence presents a new frontier in the ongoing dialogue between technology and the law. Time will tell whether, as in the case of other technological advances, generative artificial intelligence will fulfil its promise to revolutionise the way we process information. But AI’s novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege …”
The judge further concluded that ‘the AI documents lack at least two, if not all three elements of the attorney-client privilege’ and noted that Claude is not a lawyer, which alone disposed of the accused’s claim of privilege.
UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC)
For the first time, an English court, the Upper Tribunal (Immigration and Asylum Chamber), has issued a decision commenting on the risk of losing legal privilege when using AI tools.
In this matter, the Upper Tribunal dealt with two cases together, both primarily concerned with legal representatives citing ‘hallucinated’ authorities.
However, the tribunal commented on the confidentiality implications of using freely available AI tools, remarking: ‘to put client letters into an open AI source tool, such as Chat GPT, is to place that information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and thus any regulated legal professional or firm that does so would, in addition to needing to bring this to the attention of their regulator, be advised to consult with the Information Commissioner’s Office. Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks.’
Recent developments in the UK
Recently, the UK Jurisdiction Taskforce (UKJT) published a draft statement for public consultation entitled “Liability for AI harms under the private law of England and Wales”. The role of UKJT statements is to explain how the common law is likely to deal with private law problems, especially professional negligence, thrown up by new technology, in order to help increase certainty in a rapidly changing technological landscape.
Further, the Bar Council’s updated Guidance dated November 2025, entitled “Considerations when using ChatGPT and generative artificial intelligence software based on large language models”, also suggests that barristers should be vigilant about sharing any legally privileged or confidential information, or personal data, with a generative LLM system. Even as regards more bespoke systems, the Guidance warns (at [28]) that:
“Barristers need fully to understand how the tool they are using operates in this respect, including any relevant protective setting”.
In the light of the above, it is crucial to consider the potential risks of obtaining or relying upon legal advice from AI.
Hodge Jones and Allen Solicitors has recently published a new Generative AI in the Workplace policy, which covers the use of AI by HJA staff and the use of AI by third parties with whom staff interact in the course of their work.
Mandatory firm-wide training for all staff has also been introduced to raise awareness of the significant risks, including potentially serious regulatory and legal repercussions and reputational damage.
Our Dispute Resolution solicitors provide clear, strategic advice to help you resolve conflicts effectively. Contact us on 0330 822 3451 for more information or advice.
To read more about AI in the legal profession, refer to: Don’t Let a Bot Be Your Lawyer – Hodge Jones & Allen