17 March 2026

Generative AI and legal privilege: lessons for Australian organisations from United States v Heppner

Robert Johnston, Robert Wyld, Brendan Donohue, Jan Hards

The challenge (and promise) of generative AI continues apace – including for lawyers, clients and courts, particularly in the context of privileged communications. While privilege is “an important common law right or, perhaps, more accurately, an important common law immunity”,[1] where conduct occurs that is inconsistent with privilege, legal professional privilege can be lost. And it is here that the use of generative AI can lead (and has led) to challenges.

ChatGPT launched in 2022 and, approximately four years later, the US District Court for the Southern District of New York held that a defendant’s written exchanges with a publicly available generative AI platform (Anthropic’s “Claude”) were not protected by attorney–client privilege or the work product privilege doctrine under US law, even though the outputs were later shared with his lawyers.[2]

In doing so, the Court stated that it was unaware of, and the parties did not identify, any published decision which had previously presented this issue.[3]

The Court’s reasoning turned on orthodox concepts: confidentiality, the lawyer–client relationship, and whether the work was created by or at the direction of counsel. Although decided in a different jurisdiction, the decision provides timely guidance for Australian boards, executives and in‑house teams using AI in disputes, investigations, regulatory matters and high‑stakes transactions.

What happened in Heppner (US)

In United States v Heppner (S.D.N.Y., 17 February 2026), the client defendant used the public AI platform “Claude” to generate written material analysing legal issues and defence strategy after he became the target of a criminal investigation. He did so on his own initiative – i.e. without any “suggestion” or “direction” from his lawyers that he do so.[4] When the Government sought access to those materials, the defendant claimed attorney–client privilege and work product privilege protection. Judge Rakoff rejected both claims.[2]

Why attorney–client privilege failed

The Court held that the AI exchanges were not communications between a client and a lawyer or a lawyer’s agent and, critically, were not confidential. 

First, the Court noted that Claude is not a lawyer, stating that it lacked the “trusting human relationship” and fiduciary duties which arise in the lawyer-client context.[5]

Second, the Court pointed to the platform’s terms of use, which allowed the provider to retain and analyse prompts and outputs, use them for model training and disclose information to third parties, including regulators or in connection with disputes or litigation. Those terms undermined any reasonable expectation that the defendant’s exchanges with Claude, in the context of his criminal proceeding, were confidential. The Court noted that, because they were first shared with a third-party non-lawyer (Claude), the defendant’s exchanges with Claude were “not like confidential notes that a client prepares with the intent of sharing them” with a lawyer.[6]

The Court also reiterated the orthodox principle that forwarding material to counsel after it is created does not retrospectively attract privilege to otherwise non-privileged material.[2] This is distinct from communications sent by the client to a lawyer for the dominant purpose of obtaining legal advice, which can legitimately be protected by privilege.

The Court noted, however, that had a lawyer directed the client defendant to use Claude, Claude “might arguably be said to have functioned” as a trained professional who was acting as a lawyer’s agent, which, if done properly, can fall within the protection of legal professional privilege.[7]

Why work product privilege failed

Although the Court accepted that litigation was reasonably anticipated at the time the materials were created, satisfying the “anticipation of litigation” limb was not, on its own, sufficient. The work product privilege doctrine did not apply because the materials were not prepared by or at the direction of counsel and did not reflect counsel’s mental impressions or strategy.[2]

A lack of confidentiality

Any privilege claim is founded on the principle that a communication (between lawyer, client or a third party) must be confidential. The Court held that users of AI platforms do not have “substantial privacy interests in their conversations with [another publicly accessible AI platform] which users voluntarily disclosed to the platform and which the platform retains in the normal course of its business” (hence the importance of platform terms of use and/or privacy policies). In this respect, the Court applied an earlier judgment (delivered in January 2026) in the hotly contested disclosure fight in the OpenAI litigation in the same court,[8] with Judge Rakoff holding that the client defendant “could have had no reasonable expectation of confidentiality in his communications with Claude”. Absent confidentiality, it is hard to see any privilege claim properly arising.

Legal professional privilege in Australian law

Australian law protects legal professional privilege at common law and, in many courts, through the statutory client legal privilege provisions in the Evidence Act 1995 (Cth) and corresponding State legislation when communications are sought to be put into evidence before a court or tribunal. The central test is whether the communication, material or document was created for the dominant purpose of obtaining or giving legal advice, or for use in existing or reasonably anticipated Australian or overseas litigation.[9] That principle was confirmed by the High Court in Esso Australia Resources Ltd v Commissioner of Taxation, which established the dominant purpose test in Australian law.[10]

Confidentiality is equally fundamental. Privilege will not arise unless the communication was intended to be confidential and that confidentiality is maintained. In Mann v Carnell, the High Court emphasised that privilege can be lost where a client, or the client’s agent, engages in conduct that is inconsistent with maintaining confidentiality.[11]

While Australian courts have not yet ruled on AI‑specific privilege issues, the reasoning in Heppner highlights how privilege claims could fail where AI use undermines confidentiality or occurs outside lawyer‑directed workflows. Where material is entered into AI systems whose terms permit retention, analysis or disclosure of prompts or outputs, those conditions could well be found to undermine the intended confidentiality required for claiming privilege.

Why this matters in Australia

The reasoning in Heppner is likely to be consistent with the approach of the Australian courts because it reflects orthodox privilege principles already embedded in Australian law.

Three aspects are particularly relevant to Australian practice:

  • First, privilege depends on the dominant purpose for which a communication is created. Internal use of generative AI by employees to analyse legal risk or develop legal arguments may not attract privilege unless that work is undertaken for the dominant purpose of obtaining legal advice or under the direction of lawyers for that dominant purpose. The law is clear: no objective dominant purpose means no privilege.
  • Second, privilege requires confidentiality to be maintained. An AI platform’s terms – particularly whether they permit retention, analysis or disclosure of prompts and outputs, including disclosure to third parties on any basis – are a critical consideration. Entering privileged information into such systems risks destroying confidentiality in those materials and therefore jeopardises any future privilege claim. Alternatively, it may simply result in any privilege over the document being waived.
  • Third, source material created before lawyers become involved will generally not become privileged retrospectively simply because it is later shared with counsel.

Key takeaways for Australian organisations

  • Public or consumer AI tools whose terms allow the provider to retain user data or use it for AI model training could well undermine the confidentiality required for a privilege claim.
  • Involving lawyers only after AI material is generated is unlikely to support any retrospective claim of privilege over that material.
  • The involvement of lawyers in a matter is a relevant but not determinative factor in establishing privilege. Where non-lawyers use AI (including enterprise‑grade AI tools) to formulate legal defences, assess legal risk or check regulatory obligations, without such analysis having been directed by a lawyer for the dominant purpose of obtaining legal advice, the resulting communications are unlikely to be protected by privilege. Further, disclosure of privileged material to an AI platform risks waiving that privilege.

Practical guidance

What you should do

  • Treat disputes, investigations and regulatory matters as high‑risk contexts for AI use that require legal oversight.
  • Use enterprise‑grade AI tools with robust confidentiality commitments, recognising that this addresses the confidentiality risk of public AI platforms but does not, of itself, establish the dominant purpose necessary for privilege.
  • Document the dominant legal purpose of AI‑assisted work, instruct internal or external lawyers to manage any AI-related activities and train employees on what information must never be entered into AI systems (such as the substance of legal advice).

What you should not do

  • Do not use public or consumer AI tools to analyse any specific legal exposure, disputes or investigations.
  • Do not input confidential or sensitive information into AI tools without safeguards and approval.
  • Do not assume that providing AI‑generated material to lawyers later will make it privileged.
  • Do not allow unsupervised AI use in circumstances where privilege may be critical.

If you have any questions or comments about generative AI and legal privilege, please contact us

References