🤔 Interesting Development in Legal AI Disclosure
While I've argued against courts mandating special orders for AI disclosure in legal work, it turns out disclosure might be required anyway: not by courts, but by the AI providers themselves.
Case in point: Anthropic (creator of Claude) classifies legal work as a "high-risk" use case in its updated usage policy. Using its AI for legal work requires:
- Disclosure to end users
- Human-in-the-loop review
- Qualified professional oversight
So even as judges and courts debate whether to require AI disclosure, the AI companies are quietly setting their own standards.
It's a fascinating example of how technology providers may help shape legal practice, sometimes in unexpected ways.
Thoughts? How do you see AI providers' policies influencing legal practice standards?
See Anthropic's Usage Policy here: https://www.anthropic.com/legal/aup
See more posts like this at www.judgeschlegel.com/blog
#LegalTech #AI #LawFirm #Courts #Innovation #LegalInnovation