Discussion about this post

Jeffrey W. Gettleman:

Thank you for your thought-provoking post. But I would say: not necessarily. It is true that we “can’t peer into an AI’s mind,” but we also can’t peer into a judge’s mind. The only way we know what a judge based a ruling or decision on is from what that judge says in open court or in a written decision. With the appropriate prompting, an LLM will give its reasons, just like a judge, and an appellate court could evaluate them the same way it evaluates a ruling by a human judge.

It is also true that we don’t know what biases an LLM may have picked up from its training, but we likewise don’t know what biases a human judge may have picked up throughout their life. An appellate court must take a human judge’s possible biases into account, yet it has only the ruling in front of it, not the entire history of the judge’s life. Similarly, if an LLM has written a ruling and explained its reasoning, along with the factors that may have been influential or persuasive, the appellate court has the same sort of record it would have with a human judge (depending on the prompts given the “trial LLM,” the record might even be more complete than some human judges’ decisions). In fact, the LLM could be asked directly whether any biases influenced its decision, a question I presume most human judges would consider improper.

“AI judges” are not for everyone, and the use of LLMs in jurisprudence could develop in different ways. One of those ways, I believe, would be to treat the LLM somewhat like a federal magistrate judge, who with the parties’ consent can enter a final order. The parties would trade off the human judge for a quicker process; perhaps they would be required to waive the right to appellate review.
