How Lawyers Can Use ChatGPT Without Violating Attorney-Client Privilege
I've been watching attorneys get sanctioned for AI misuse for two years now, and the thing that strikes me is how the conversation keeps focusing on the wrong problem.
Everyone talks about hallucinations. Fake citations. Lawyers who submitted briefs full of cases that don't exist. And yes, that's bad. 886 documented cases bad, according to researcher Damien Charlotin's tracker, with new ones showing up daily. But hallucinations are the visible failure. They get caught because judges look up the citations and can't find them.
The confidentiality failures? Those are invisible. When a lawyer pastes client details into ChatGPT and OpenAI retains that data for 30 days, there's no judge catching that. No sanctions order. No headline.
A Stanford RegLab analysis cited survey data showing that nearly three out of four lawyers plan to use generative AI in their practice, and found that general-purpose LLMs hallucinate on legal queries somewhere between 58% and 88% of the time. The ABA, for its part, has formally confirmed that lawyers may use generative AI, provided they meet their existing ethical obligations. So the profession is hurtling toward adoption while the guardrails for the hardest problem (protecting client data) barely exist for individual practitioners.
That tension is what this piece is about.
The ABA Weighs In: Formal Opinion 512
On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility released Formal Opinion 512, the first comprehensive national guidance on lawyers using generative AI. If you haven't read the full 15 pages, the short version is that it maps AI use onto six existing obligations under the Model Rules.
Competence (Model Rule 1.1) requires a reasonable, current understanding of what AI tools can and can't do. Not a computer science degree. But enough to know that ChatGPT confidently generates fake case law, that your data may be used for training, and that "I didn't know the AI could do that" stopped being an excuse about two years ago. This is an ongoing obligation, which matters because these tools change constantly.
Confidentiality (Model Rule 1.6) is where things get uncomfortable. The opinion applies the same third-party service provider analysis used for cloud storage and e-discovery vendors: you must make "reasonable efforts" to prevent unauthorized disclosure. That means understanding whether the AI provider uses your input for training, how long data is retained, and who can access it. We'll come back to this one.
Communication (Model Rule 1.4) may require disclosing AI use to clients, depending on circumstances. The opinion doesn't create a blanket requirement, but it strongly implies that when client data goes to a third-party AI service, informed consent is probably necessary.
Candor toward the tribunal (Model Rules 3.1, 3.3, 8.4(c)) means you personally own every citation and factual assertion in your filings. Full stop. The sanctions cases have made this obligation very concrete.
Supervisory responsibilities (Model Rules 5.1, 5.3) require managing attorneys to establish clear AI policies and ensure compliance. Several of the biggest sanctions cases involved partners who had no idea their associates were using ChatGPT. Which brings us to those cases.
Reasonable fees (Model Rule 1.5) presents an interesting wrinkle. If AI lets you draft a memo in 20 minutes that used to take three hours, you can't bill three hours. You also shouldn't charge clients for time spent learning AI tools that benefit your whole practice. You may, however, be able to pass through AI tool costs on a per-use basis if you disclose this upfront.
The opinion's conclusion is measured but clear: lawyers can use AI, but they own the consequences entirely.
The Sanctions Docket Is Growing Fast
The headline cases are worth knowing in detail, because each one teaches something slightly different.
Mata v. Avianca (S.D.N.Y., June 2023) started it all. Steven Schwartz used ChatGPT to research a personal injury case against the airline and submitted a brief containing six nonexistent cases. When the court asked for copies of the cited decisions, Schwartz went back to ChatGPT, which assured him the cases were real. Judge P. Kevin Castel imposed a $5,000 fine on Schwartz, co-counsel Peter LoDuca, and the firm of Levidow, Levidow & Oberman jointly. LoDuca's role is particularly instructive: he had reviewed the filing for "style" and "flow" but never checked whether the cases existed.
Smith v. Farwell (Massachusetts Superior Court, February 2024) introduced the supervisory angle more explicitly. The supervising attorney didn't know his subordinates had used AI. He'd reviewed their work for style and grammar but not for the accuracy of case citations. The court sanctioned the firm and issued a warning to the Massachusetts bar at large. (The decision, which cites Mata extensively, has since been discussed in a Maryland State Bar article, but the broader point landed: if you're signing the filing, you're vouching for everything in it.)
Noland v. Land of the Free (Cal. Ct. App., 2nd Dist., September 2025) is the one I keep thinking about. Attorney Amir Mostafavi used ChatGPT, Claude, Gemini, and Grok to draft appellate briefs, and the court found that 21 of 23 quotations in his opening brief were fabricated. The $10,000 sanction was the largest ever issued by a California state court for AI-related misconduct, and the court referred Mostafavi to the State Bar. But here's the part that should concern every litigator: the court also declined to award fees to opposing counsel, partly because they hadn't detected the fake citations either and hadn't alerted the court. The implication is hard to miss: spotting AI hallucinations in your opponent's filings may be on its way to becoming an expected part of professional competence. This is California's first published appellate opinion on AI hallucinations, and it cuts in both directions.
Wadsworth v. Walmart Inc. (D. Wyo., February 2025) proved that scale doesn't protect you. Attorney Rudwin Ayala from Morgan & Morgan (the 42nd largest U.S. law firm by headcount) used the firm's internal AI platform, MX2.law, to generate case law for a motion in limine. Eight of the nine cited cases were nonexistent. Ayala was fined $3,000 and removed from the case. Co-signers T. Michael Morgan and Taly Goody were fined $1,000 each. The court noted favorably that Morgan & Morgan had since implemented additional verification requirements, which mitigated the sanctions. But the reputational damage of "America's largest injury law firm cites fake cases" is harder to walk back.
These are all hallucination cases. But I want you to notice something about the underlying behavior: in every one, lawyers put real client facts into AI tools and submitted the output without adequate scrutiny. If they weren't thinking about whether the citations were real, they almost certainly weren't thinking about whether the client data they fed into ChatGPT was being retained, used for training, or accessible to OpenAI employees.
The hallucination problem gets you sanctioned. The confidentiality problem can get you disbarred. And nobody's tracking those cases because they don't produce visible errors in court filings.
The Confidentiality Gap Nobody Talks About
Model Rule 1.6 requires "reasonable efforts" to prevent unauthorized disclosure of client information. When you type a client scenario into ChatGPT, you're transmitting information to OpenAI. Even without names. A prompt like "my client is a mid-size pharmaceutical company facing an FDA warning letter about contamination at their New Jersey facility" contains enough specifics that the client could be identified through public records.
What happens to that data depends on what you're paying for.
On ChatGPT's free and Plus tiers, your conversations can be used to train future models by default. You can opt out in settings, but OpenAI still retains conversations for up to 30 days for abuse monitoring. Temporary Chat mode? Also retained for 30 days. There's no way for an individual user on consumer plans to achieve zero retention.
ChatGPT Team and Enterprise plans offer stronger protections: no training on your data, better contractual commitments. But these require organizational procurement. A solo practitioner or small-firm attorney can't just sign up.
Claude, Gemini, and other tools follow similar patterns. Consumer plans have weaker protections. Business and enterprise plans are stronger. The gap between what's available to a Big Law associate with a firm-procured license and a solo practitioner handling a custody case is enormous, even though they share identical confidentiality obligations under Rule 1.6.
This is a structural problem. A solo practitioner handling sensitive family law matters has the same ethical duty as a firm with a dedicated AI governance team. The ABA opinion doesn't distinguish between them. The tools do.
And the "just turn off chat history" advice, which I've seen in at least a dozen CLE presentations? That doesn't solve the retention problem. It stops your data from being used for training (on most platforms). It doesn't stop the provider from holding your data for 30 days. For a lawyer whose client's privileged communications just sat on OpenAI's servers for a month, that distinction matters.
One Way to Close the Gap
The cleanest solution to the confidentiality problem is architectural: make sure client data never reaches the AI provider in the first place.
That's the approach behind TrustPrompt. Before your prompt leaves your browser, it strips out names, financial figures, company names, dates, and other identifying information and replaces them with realistic placeholders. The AI processes a sanitized version. When the response comes back, the placeholders get mapped back to your originals. The AI never learns that your client, your opposing party, or the specific dollar amounts exist.
To make this concrete: you write a prompt like "Draft a demand letter from Sarah Chen at Meridian Partners to opposing counsel at Blackwell & Associates regarding the breach of the November 2024 licensing agreement with a damages claim of $2.3 million." The AI receives something like "Draft a demand letter from James Ward at Greenfield Group to opposing counsel at Hawthorn Legal regarding the breach of the March 2023 distribution agreement with a damages claim of $1.7 million." You get a usable draft back with your real details restored. OpenAI or Anthropic or Google never saw your client's actual information.
All of this processing happens locally in the browser. There's no intermediary server that sees your original text. This is what zero-knowledge architecture means in practice: the tool operates without ever having access to the data it's protecting.
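To make the mechanics concrete, here is a minimal sketch of the general technique, reversible placeholder substitution that runs entirely client-side, written in TypeScript. It is illustrative only: the regular expressions, placeholder pools, and function names are assumptions made for this example, not TrustPrompt's actual implementation, and a production tool would need far more robust entity detection.

```typescript
// Minimal sketch of reversible placeholder substitution running entirely
// client-side. Illustrative only: the patterns, placeholder pools, and
// function names are assumptions for this example, not TrustPrompt's code.

type Mapping = Map<string, string>; // placeholder -> original value

// Deliberately crude detectors. The name pattern also sweeps up two-word
// company names, which is acceptable here because everything is restored
// later; a real tool would need far more robust entity recognition.
const DETECTORS: { label: string; pattern: RegExp; fakes: string[] }[] = [
  { label: "NAME", pattern: /\b[A-Z][a-z]+ [A-Z][a-z]+\b/g, fakes: ["James Ward", "Greenfield Group"] },
  { label: "MONEY", pattern: /\$[\d,.]+(?:\s?(?:million|billion))?/g, fakes: ["$1.7 million", "$450,000"] },
  { label: "DATE", pattern: /\b(?:January|February|March|April|May|June|July|August|September|October|November|December) \d{4}\b/g, fakes: ["March 2023", "August 2022"] },
];

// Replace detected entities with realistic stand-ins and remember the mapping.
function sanitize(prompt: string): { sanitized: string; mapping: Mapping } {
  const mapping: Mapping = new Map();
  let sanitized = prompt;
  for (const { label, pattern, fakes } of DETECTORS) {
    let i = 0;
    sanitized = sanitized.replace(pattern, (original) => {
      // Fall back to a generic tag if we run out of realistic stand-ins,
      // so every placeholder stays unique and the mapping stays reversible.
      const placeholder = i < fakes.length ? fakes[i] : `${label}_${i}`;
      i++;
      mapping.set(placeholder, original);
      return placeholder;
    });
  }
  return { sanitized, mapping };
}

// Swap the stand-ins in the model's response back to the real values.
function restore(response: string, mapping: Mapping): string {
  let restored = response;
  for (const [placeholder, original] of mapping) {
    restored = restored.split(placeholder).join(original);
  }
  return restored;
}

// Only `sanitized` would ever be transmitted to the AI provider.
const { sanitized, mapping } = sanitize(
  "Draft a demand letter from Sarah Chen at Meridian Partners regarding the " +
    "breach of the November 2024 licensing agreement, damages of $2.3 million.",
);
// ...send `sanitized` to the model, then: restore(modelResponse, mapping)
```

The detection patterns above are toy-grade, but the architectural point holds: the placeholder-to-original map never leaves the user's machine, so the provider only ever sees the sanitized text.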
For lawyers working under Rule 1.6, this matters because it shifts the confidentiality analysis entirely. You're not relying on a contractual promise from the AI provider to not misuse your data (a promise that varies by tier, changes over time, and is controlled by a company whose core business incentive runs in the opposite direction). You're ensuring the data never gets there. That's a fundamentally different posture when it comes to "reasonable efforts."
Practical Steps Regardless of What Tools You Use
Whether or not you adopt a sanitization approach, there are things every attorney using AI should be doing right now.
Sort your work by sensitivity. Not every AI task carries the same risk. General legal research, drafting templates, explaining concepts in plain language, brainstorming arguments for hypothetical fact patterns: these involve no client information and can use consumer AI tools safely. But anything involving client names, case-specific facts, financial details, medical records, settlement terms, or litigation strategy needs real data protection. Most legal work falls in the second category. Be honest about that.
Know what your tools do with data. Before using any AI tool for client work, you should be able to answer: Is my input used for training? How long is it retained? Who at the provider can access my conversations? What subprocessors handle my data? Can I get a Data Processing Agreement? If you can't answer these, you haven't met the competence requirement under Rule 1.1 for using that tool with client information.
Verify everything. This should be obvious after 886 documented cases, but it keeps happening. AI output needs the same scrutiny you'd give work from a first-year associate who hasn't quite internalized that confidence and accuracy are different things. Verify every citation. Read the cases, not just the names. Confirm they support the specific proposition you're citing them for. The Johnson v. Dunn court (N.D. Ala., July 2025) made clear that even citing a real case isn't enough if the case doesn't actually support the argument you're making.
Write it down. You need a written AI use policy. Solo practitioners too. It should cover which tools are approved, what data can go into them, what sanitization is required for sensitive work, who verifies AI output, and how AI use is disclosed to clients and courts. Multiple jurisdictions now have standing orders requiring AI disclosure in filings. Check your local rules. A documented policy also protects you when things go wrong. The court in Wadsworth v. Walmart specifically noted Morgan & Morgan's "responsible attitude toward generative AI" as a mitigating factor in declining harsher sanctions.
Talk to your clients. Formal Opinion 512 says informed consent may be necessary depending on circumstances. Rather than treating this as compliance overhead, consider what it looks like from the client's perspective. They probably know you're using AI. They want to know you're using it responsibly. A short paragraph in your engagement letter covering AI use, data protection measures, and the client's option to opt out is increasingly standard at firms that are thinking about this proactively.
Where This Is Heading
Damien Charlotin, who maintains the AI hallucination tracker from his research post at HEC Paris, told the press he went from seeing a few cases per month when he started tracking to a few cases per day. The total is at 886 documented cases globally and climbing. Courts are moving from "this is a novel issue" to "there is no excuse for this anymore." The Johnson v. Dunn court in Alabama explicitly stated that monetary sanctions alone aren't deterring the behavior and that something more is needed.
Meanwhile, the Stanford RegLab research found that even purpose-built legal AI tools from LexisNexis and Thomson Reuters (products that use retrieval-augmented generation and claim to minimize hallucinations) still produce inaccurate results more than 17% of the time. General-purpose chatbots are far worse. The technology is getting better, but the gap between "useful drafting tool" and "reliable legal research authority" remains wide.
The attorneys who are going to navigate this well aren't the ones avoiding AI (they'll fall behind and may actually be violating their duty of technological competence under Rule 1.1). They're the ones who treat AI like they'd treat any other powerful, flawed tool: with clear procedures, appropriate skepticism, and a structure that protects clients even when the tool misbehaves.
That means understanding ABA Formal Opinion 512 not as a set of restrictions but as what it actually is: a roadmap. The ethics rules tell you what "responsible AI use" means in specific, enforceable terms. The sanctions cases tell you what happens when you skip the steps. The gap between those two things is where your practice lives.
TrustPrompt removes client names, financial details, and other sensitive data from your prompts before they leave your browser. The AI never sees your confidential information. Try it free at trustprompt.io
Related Reading:
- Why Companies Are Banning ChatGPT (And How to Use AI Safely Anyway)
- The Complete Guide to Using AI at Work When Your Company Has Banned It
- ChatGPT's New Privacy Policy: What the Ads Update Actually Means for Your Data
- How Zero-Knowledge Architecture Protects Your AI Prompts (coming soon)