Is your AI tool risking privilege waiver? Learn how third-party AI disclosure affects privilege and how to protect your clients.
The Privilege Problem No One’s Talking About
Your attorneys are probably using ChatGPT right now. According to recent surveys, over 60% of legal professionals have used generative AI for work. The question isn’t whether your firm is using AI—it’s whether that use is risking privilege waiver.
Here’s the uncomfortable reality: when you paste privileged information into a public AI service, you may have just disclosed it to a third party. And third-party disclosure is one of the fastest ways to waive attorney-client privilege.
How Privilege Waiver Happens
Attorney-client privilege protects confidential communications made for the purpose of obtaining legal advice. But privilege has a critical weakness: voluntary disclosure to a third party generally waives the privilege.
When you use ChatGPT or similar public AI services:
- Your prompt travels to OpenAI's servers: a third-party disclosure
- Multiple systems process your data: more potential disclosure points
- The data may be stored indefinitely: extended exposure risk
- Terms of service may allow data use: including for model training
- OpenAI employees may access conversations: additional third parties
The traditional test for privilege waiver asks: Did the client take reasonable steps to preserve confidentiality? Using a public AI service with client information is hard to characterize as “reasonable.”
The Common Law Reality
Courts haven’t yet established bright-line rules for AI and privilege waiver, but the principles are clear:
Disclosure to third parties waives privilege unless:
- The third party is the attorney’s agent for providing legal services
- The disclosure was necessary and the client consented
- An applicable exception applies (common interest, crime-fraud, etc.)
Is OpenAI your agent for providing legal services? Is their data processing “necessary” for your representation? These are uncomfortable questions without clear answers.
What Public AI Actually Does With Your Data
Let’s look at what happens when you submit a prompt to a public AI service:
Data Transmission
```
Your Computer → Internet → Cloud Load Balancer → API Gateway →
Processing Servers → Model Inference → Response Generation →
Your Computer
```
At each step, data is processed, logged, and potentially stored. This isn’t a secure, confidential communication—it’s a broadcast across infrastructure you don’t control.
Storage and Retention
Public AI providers typically:
- Log all API requests and responses
- Store conversation history
- Retain data for abuse prevention and improvement
- Use data for model training (depending on the plan)
Even “enterprise” versions of consumer AI tools have limitations. Read the fine print carefully.
Employee Access
Support staff, engineers, and security teams at AI providers may access your conversations for:
- Troubleshooting technical issues
- Investigating abuse reports
- Quality assurance
- Model improvement
Each person who accesses your data is another potential disclosure.
The “But It’s Just Research” Argument
Some attorneys rationalize: “I’m only using AI for general research, not client-specific matters.” This argument has problems:
- Context matters: Even “general” research often contains client-identifying details
- Pattern recognition: Multiple queries can reveal case strategy
- Habit formation: Today’s general query becomes tomorrow’s client-specific one
- No bright line: When does research become confidential?
The safer approach: assume everything you do for a client matter is potentially privileged, and protect it accordingly.
Enterprise AI Isn’t Automatically Safe
“We use ChatGPT Enterprise” is not a complete answer. Enterprise versions of consumer AI tools offer improvements, but they’re not purpose-built for legal confidentiality:
| Feature | Consumer AI | Enterprise AI | Legal-Specific AI |
|---|---|---|---|
| Data training | Often yes | Usually no | Contractually prohibited |
| Private network | No | Sometimes | Yes (VNet) |
| BAA available | No | Maybe | Yes |
| Audit logging | Limited | Yes | Legal-specific |
| DLP integration | No | Limited | Native |
| Privilege-aware | No | No | Yes |
Enterprise AI is better than consumer AI, but it’s still a general-purpose tool. Legal practice requires purpose-built solutions.
What Adequate Protection Looks Like
A privilege-preserving AI implementation requires:
1. Private Infrastructure
- Private endpoints: AI requests never cross the public internet
- Virtual network isolation: Your data stays in your security perimeter
- No multi-tenant processing: Your prompts are processed in isolation
2. Contractual Protection
- Data Processing Agreement (DPA): Clear terms on how data is handled
- Business Associate Agreement (BAA): For matters involving PHI
- Training prohibition: Contractual bar on using your data for model training
- Deletion rights: Ability to require data removal
3. Technical Controls
- Encryption: TLS 1.3 in transit, AES-256 at rest
- Access controls: Only authorized personnel access AI features
- Data Loss Prevention: Rules blocking privilege markers from risky destinations
- Sensitivity labels: Automatic classification of privileged content
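The DLP rule described above can be approximated in a few lines: a pre-send check that refuses to forward any prompt containing a privilege marker. This is a hedged sketch, not Microsoft 365 DLP itself, and the marker patterns are assumptions you would tune to your firm's document templates.

```python
import re

# Illustrative privilege markers -- not exhaustive; tune to your
# firm's banners and matter-number formats (the last pattern is a
# hypothetical matter-number format, included only as an example).
PRIVILEGE_MARKERS = [
    r"attorney[- ]client privileged?",
    r"privileged\s*(&|and)\s*confidential",
    r"work[- ]product",
    r"\b\d{2}-\d{4,6}\b",
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains a privilege marker and
    should never be sent to an external AI service."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in PRIVILEGE_MARKERS)

print(is_blocked("Summarize this memo marked PRIVILEGED & CONFIDENTIAL"))  # True
print(is_blocked("What is the statute of limitations for breach of contract?"))  # False
```

A production deployment would run this kind of check inside your DLP platform, where it can cover email, chat, and browser uploads, not just one application.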
4. Audit Capability
- Complete logging: Every prompt and response documented
- User attribution: Who used AI, when, for what matter
- Retention: Logs preserved for compliance documentation
- Export: Ability to produce logs for ethics inquiries
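To make the audit requirement concrete, here is a minimal sketch of one audit record, assuming a simple JSON-lines log. The field names are illustrative, not a standard; storing hashes of the prompt and response (rather than the text itself) is one design choice that keeps the log from duplicating privileged content.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_use(user: str, matter: str, prompt: str, response: str) -> str:
    """Build one JSON audit record for an AI interaction.
    Prompt and response are stored as SHA-256 hashes so the log
    proves what was submitted without copying privileged text."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "matter": matter,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    return json.dumps(record)

# Hypothetical user and matter identifiers for illustration.
print(log_ai_use("jdoe", "2024-0137", "draft a demand letter", "Dear Counsel, ..."))
```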
The “Trust Architecture” for Legal AI
We call this the Trust Architecture:
Layer 1: Privileged Data Never Leaves Your Environment. All processing happens within your Microsoft 365 and Azure tenant; data doesn't touch third-party servers.
Layer 2: Human-in-the-Loop. AI generates drafts; attorneys review, edit, and approve. Nothing is used without human judgment.
Layer 3: Complete Audit Trail. Every AI interaction is logged. When the bar asks how you used AI, you have documentation.
Layer 4: Minimum Necessary Data. Only the data required for the task enters the AI system. No full document dumps or unnecessary identifiers.
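The minimum-necessary layer can be enforced mechanically as well as by policy. Below is a hedged sketch of pre-submission minimization: known client identifiers are swapped for neutral placeholders before any text enters the AI system. The identifier mapping here is an assumption; a real deployment would pull it from your matter-management system.

```python
def minimize(text: str, identifiers: dict[str, str]) -> str:
    """Replace known client identifiers with neutral placeholders
    before the text is sent to an AI system."""
    for real, placeholder in identifiers.items():
        text = text.replace(real, placeholder)
    return text

# Hypothetical identifiers for illustration only.
ids = {"Acme Corp": "[CLIENT]", "Jane Smith": "[WITNESS-1]"}
print(minimize("Acme Corp's dispute turns on Jane Smith's testimony.", ids))
# -> [CLIENT]'s dispute turns on [WITNESS-1]'s testimony.
```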
Practical Steps to Protect Privilege
Immediate Actions
- Audit current AI use: Survey your attorneys. What tools are they using? What data are they inputting?
- Implement DLP: Microsoft 365 Data Loss Prevention can block privileged content from reaching public AI sites.
- Block public AI: Network-level filtering prevents access to consumer AI services.
- Establish clear policy: Written guidance on what AI tools are approved and what data is never to be input.
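Network-level blocking is normally configured in your firewall or secure web gateway, but the logic is simple enough to show as a toy sketch. The blocklist below is a hypothetical, incomplete example; a real deployment would use your gateway vendor's AI-tools category.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of consumer AI domains (illustrative only).
BLOCKED_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def allowed(url: str) -> bool:
    """Return False if the URL's host, or any parent domain of it,
    appears on the blocklist."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return not (candidates & BLOCKED_DOMAINS)

print(allowed("https://chatgpt.com/c/123"))      # False
print(allowed("https://law.example.com/brief"))  # True
```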
Medium-Term Actions
- Deploy enterprise AI: Purpose-built solutions with proper agreements and controls.
- Train your team: Ensure everyone understands privilege implications of AI use.
- Update engagement letters: Inform clients about AI use and the protections you've implemented.
- Document everything: Create audit trails that demonstrate reasonable precautions.
The Client Conversation
How do you discuss AI and privilege with clients?
Be proactive: Don’t wait for clients to ask. Address it in engagement letters and initial consultations.
Be transparent: Explain what AI tools you use, how they’re protected, and why you’ve chosen them.
Be confident: If you’ve implemented proper safeguards, AI is a benefit—not a risk—to the representation.
Sample client communication:
We use AI tools to enhance our efficiency in serving you. Our AI infrastructure is specifically designed for legal confidentiality: your data remains in our private environment, is never used to train AI models, and is protected by the same safeguards as all our confidential communications. All AI-assisted work is reviewed by your attorney before use.
Key Takeaways
- Privilege waiver through third-party disclosure is a real risk with public AI
- Consumer and even enterprise AI tools aren’t designed for privilege protection
- Private AI infrastructure with proper agreements is essential for legal use
- DLP, sensitivity labels, and access controls provide additional protection
- Documentation and audit trails support reasonable precaution defense
- Client communication about AI use builds trust and manages expectations
Protect Your Privilege
Attorney-client privilege is the foundation of legal practice. Don’t let the convenience of AI undermine centuries of confidentiality protection.
Free Privilege Risk Assessment: We’ll evaluate your current AI tools and data flows to identify privilege exposure points and recommend safeguards.
About the Author
Cloud Magic Technology Group is a leading IT services provider in the San Francisco Bay Area, helping companies modernize their technology infrastructure.