As a judge deeply involved in legal technology, I've frequently used the phrase "keep the human in the loop" when discussing AI adoption in the law. It's become something of a catchphrase in legal tech circles - a seemingly simple solution to the complex challenges posed by artificial intelligence in law. However, I've grown increasingly concerned about how this phrase is being interpreted and applied.
The problem isn't with AI technology itself, but rather with the oversimplification of the "human in the loop" concept. When technologists and AI enthusiasts champion this as a universal solution, they often suggest that any human oversight is sufficient. That is a dangerous oversimplification: it ignores both the quality of the oversight and the experience of the human providing it.
This distinction becomes particularly critical when we consider how AI is increasingly being promoted for use in drafting legal documents, including judicial opinions. The common refrain is that as long as a human reviews and approves the AI's output, we're on solid ground. But this view fundamentally misunderstands both the nature of legal expertise and the institutional role of judges in our legal system.
There's immense value in what I call the "blank page exercise" - the process where legal professionals must think through issues from scratch, wrestle with complex legal concepts, and develop their own analytical frameworks. When we allow AI to consistently provide the first draft to inexperienced practitioners, we risk creating a generation of lawyers and judges who become editors rather than authors of legal thought.
This concern extends beyond theoretical implications. When an inexperienced legal professional relies heavily on AI-generated first drafts, we fundamentally invert the traditional mentorship model. Instead of a senior partner or experienced judge guiding a junior colleague, artificial intelligence sets the initial direction, with the less experienced human serving merely as a reviewer. The AI becomes the de facto senior partner, with the human playing the role of junior associate. This inverted relationship threatens the traditional development of legal expertise and judgment, especially if the LLM being used has not been designed for this use case.
For judges, this concern carries even greater weight. Judicial opinions aren't merely resolutions of individual disputes - they can form the building blocks of precedent that shape future legal decisions. When judges rely on AI to draft opinions without the expertise to properly guide and evaluate its output, we risk diluting the authentic judicial voice that should reflect years of legal experience and careful consideration. Judges, not generative AI, have been elected or appointed to serve as the voice of the judiciary. The potential long-term effects on legal precedent could be profound if AI consistently shapes the initial framework of judicial reasoning.
The level of expertise of the "human in the loop" dramatically affects how AI should be used in legal practice. A seasoned judge or lawyer, drawing on years of experience and understanding of AI tools, can effectively use LLMs as a supplementary tool, critically evaluating and guiding the output based on their deep knowledge of legal principles and practical implications. However, for those earlier in their careers, heavy reliance on AI-generated first drafts could impede the development of crucial analytical skills and professional judgment.
This isn't about resisting technological progress - it's about understanding that not all human oversight is equal, and that some legal roles demand more than oversight alone. The phrase "human in the loop" suggests a binary: either there is human oversight or there isn't. But the reality demands a more nuanced approach, one that considers the experience level of the human, their familiarity with AI tools, the nature of the task, the institutional role of the reviewer, and the potential impact on professional development and legal precedent.
As we continue to integrate AI into the justice system, we need to move beyond simplistic catchphrases and develop more sophisticated frameworks for AI usage that account for experience levels, complexity of legal issues, and the critical need for developing deep legal expertise. For judges particularly, these frameworks must recognize their unique role as creators of precedent.
Let me be clear: this isn't about opposing AI in legal practice. When properly prompted by experienced legal professionals who understand both the law and AI capabilities, LLMs can be remarkably powerful tools and tremendous time-savers, even for initial drafts. The key is ensuring that the human directing the AI brings the right level of expertise and authority to meaningfully guide and validate its contributions. That's what "human in the loop" should really mean.
For more posts like this, visit www.JudgeSchlegel.com.
The trouble will be that, as time goes by, fewer and fewer legal professionals will be experienced at what you call the "blank page exercise", and younger professionals coming on will have less ability to "think through issues from scratch, wrestle with complex legal concepts, and develop their own analytical frameworks" because of their greater use of AI in their earlier years. In time, even the experienced judges will have relied on AI for most of their lives and will not have developed the skills that experienced legal professionals have today.
As a teacher of mathematics, I have noticed over the years a growing reliance on calculators by students, and also by teachers, who often just teach students how to use the calculator for arithmetic rather than teaching, practicing and honing arithmetic skills. In the long term this leads not only to a general lack of arithmetic skill, but to a lack of understanding of what is really going on when, for example, you convert mixed numbers to improper fractions. It is so easily done at the push of a button that many students have no real idea of what the calculator is doing, and no idea of whether the answer it produces is a sensible one. A similar thing happens when students use computer programs to graph equations without first practicing by hand: they end up with no understanding of how the line produced relates to the various parts of the equation.

The flow-on effects when these students try to do maths at higher levels are enormous, producing what I would call a great "mathematical illiteracy" in the general population of school students, and even in those going on to university. A generation ago there were no "foundation" or "pre-" courses; university entrants were expected to have the necessary skills, and usually did. Many of today's students must take extra courses to catch up to the level they were supposed to have reached before they can begin their degrees proper.
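To take the mixed-numbers example: converting 2 3/4 by hand means seeing that 2 3/4 = 2 + 3/4 = 8/4 + 3/4 = 11/4, and the working itself tells you the result must lie between 2 and 3. The button gives you 11/4 with none of that built-in sense check.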
My prediction is that too much reliance on AI will cause a "great atrophying" of minds - or perhaps not atrophying so much as a failure to properly develop and train the minds of the next generation of legal professionals.