April 2, 2026

Risks of Using an AI Chatbot for Legal Advice: Lessons from United States v. Heppner

Imagine that you are an executive (who is not a lawyer) and are concerned about whether what your company plans to do is legal. You could call your lawyer, who might bill you for the call. Or you could ask an AI chatbot, such as Claude or ChatGPT, about the legal risk. The chatbot will likely compliment you on the incisive question, provide a highly confident answer (that may or may not be right), and will not bill you on an hourly basis.

That is essentially what financial services executive Bradley Heppner did. It did not end well. A federal court recently ruled that Heppner’s chats with the AI tool Claude were not protected by attorney-client privilege or the work-product doctrine. That means that the other side (in this case, the federal government) could get access to his chatbot prompts, uploads, and responses, and learn a great deal about, for example, whether Heppner knew what he was doing was illegal.

The court ruled that AI-generated communications and documents do not deserve any legal shield.[i] Heppner, who was charged with securities fraud and other federal crimes, had used Claude to draft and analyze documents related to his case. He even fed Claude information he received from his actual lawyers. The court ruled that these AI chats were not confidential, were not communications with counsel, and were not created for the purpose of obtaining legal advice. Therefore, the federal prosecutors could review them and use them in the case.

Conversations with a chatbot are generally not confidential under the terms of service of the most popular AI models, and the Heppner ruling indicates that they are discoverable in litigation. Moreover, if you feed your attorney’s recommendations into a chatbot to get a “second opinion,” you have likely destroyed the privilege that would otherwise protect the attorney’s communications with you.

This risk applies not just to criminal matters, but also to civil lawsuits (such as employment law claims and business disputes). For example, if non-lawyer employees use an AI tool in connection with workplace investigations, employment complaints, or litigation preparation, they may be creating records that litigation adversaries can obtain and use against the company.

If you want to use AI tools for sensitive legal matters, at least do so under your lawyer’s supervision. The court in Heppner hinted that if Heppner’s lawyers had managed his use of the AI chatbot, the result might have been different.

AI tools can be tremendously useful in business operations, but as the Heppner decision makes clear, using them for legal questions, without guidance, can create significant and unnecessary risk. Before your company or its employees turn to a chatbot for legal guidance, talk to a real lawyer first. Contact your Payne & Fears attorney to discuss how to develop sensible AI-use guidelines that protect your organization’s privileged communications and minimize litigation exposure.


[i] The court’s minute order ruling in Heppner is fact-dependent. In its narrowest form, the ruling is that a party’s conversations with a publicly available, non-enterprise generative AI platform, which were not made at the request of counsel, are not protected by the attorney-client privilege or the work-product doctrine. United States v. Heppner, 2026 U.S. Dist. LEXIS 32697 (Feb. 17, 2026).