Be Careful What You Ask For: Potential Pitfalls of Using AI in the Legal World

Written by Lee Hoyle, Esq.

Edited by Bill Pfund, Esq.

Artificial intelligence has gone from the realm of science fiction to a consumer product with the release of ChatGPT and other large language models. Proponents of AI have touted its ability to revolutionize any number of industries, including the legal profession. These optimistic claims may overlook potential pitfalls in using the new technology professionally, particularly in the legal realm. One lawyer found out the hard way that AI might not be what it seems at first blush.

Before using AI in any professional capacity, an attorney must have a basic understanding of how large language models work. In this context, AI typically refers to large language model programs. At the risk of oversimplification, these programs operate as highly refined predictive text generators. They have gone from predicting the next word to predicting the correct response, even where the correct response is a sentence, a paragraph, or pages of text. Critically, these programs learn to produce those responses by processing and evaluating enormous amounts of text. By training the programs on that text, the programmers teach them to analyze questions from users and generate responses based on patterns in the training texts.
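
At the risk of further oversimplification, the short sketch below (written in Python purely as an illustration, not as a description of how any commercial program actually works) shows the core predictive-text idea: a toy model learns which word tends to follow another in a few made-up sample sentences and then chains those predictions together to generate new text.

```python
# A deliberately simplified sketch of "predictive text," assuming nothing about
# any commercial model: it only learns which word most often follows another
# word in a few sample sentences, then chains those predictions together.
from collections import defaultdict, Counter

# Hypothetical training text made up for this illustration.
training_text = (
    "the court granted the motion to dismiss "
    "the court denied the motion for sanctions "
    "the statute of limitations barred the claim"
)

# Count how often each word follows each other word in the training text.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` during training."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

# Starting from "statute," chain predictions to generate a fluent-sounding
# phrase. The result reflects only patterns in the training text; nothing
# checks whether the generated statement is true or was ever written.
word, output = "statute", ["statute"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g., "statute of limitations barred the court"
```

Even at this tiny scale, the output reads like a sentence a lawyer might have written, although no one ever wrote it; the program optimizes for a plausible continuation, not for truth. Real large language models do the same thing with vastly more sophistication, which is why their fluent answers still demand verification.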

This reliance on training texts presents a potential problem for lawyers and other legal professionals who are required to keep certain information confidential. Obtaining a useful response from AI might require providing key details specific to a given case. Supplying those details might allow the AI to provide a more helpful response, but it might also unintentionally disclose confidential information. Furthermore, because many large language models continue to be trained on new user inputs, such a disclosure might not simply place information into a computer program where no one will ever discover it. The information provided to the program might be used by the program or its owner in ways that neither the professional nor the person whose information was disclosed could foresee. Any legal professional should carefully consider whether using commercially available AI programs to assist with legal strategy is consistent with the duty to maintain the confidentiality of certain information.

Unintentional disclosure of confidential information is not the only potential pitfall for legal professionals using AI. In Mata v. Avianca, an attorney's blind trust in the answers provided by AI opened him up to uncomfortable questions from a federal judge and ultimately led to sanctions against him and his firm. The issue began when the attorney was presented with a legal question in an unfamiliar area, namely what impact the automatic stay from a bankruptcy proceeding had on the statute of limitations for an injury claim brought on behalf of the debtor. The attorney tried traditional search engines without success, so he turned to ChatGPT, a program he had only recently heard of. He understood ChatGPT to be a "super search engine" and assumed that its responses would be similar to those of other search engines. That false assumption led to federal sanctions.

ChatGPT responded to inquiries about the legal issues with a general discussion of those issues. When asked more directly whether the bankruptcy could toll the statute of limitations, ChatGPT responded that it could. The attorney then asked for cases providing examples of that result. The large language model complied, identifying by party name and reporter citation several cases that appeared to stand for the proposition the attorney needed. The attorney then used those citations to oppose a dispositive motion filed by the opposing party in a pending federal case. The only problem was that the cases were entirely fabricated by the program. The large language model had responded to a request for legal authority by imitating the form of legal citation, supplying names and numbers in citation format, but those citations did not correspond to decisions handed down by judges and published in the applicable reporters. The citations appeared genuine, but the cases did not exist.

The opposing party quickly noticed the problem and pointed out that the attorney had relied on non-existent authority. Still unaware of the problem, the attorney returned to ChatGPT and asked it for copies of the decisions it had previously identified. ChatGPT obliged, producing what appeared to be the decisions corresponding to the citations at issue. Like the citations, however, these decisions were wholly fabricated by the large language model. The attorney continued to trust the program's veracity and filed the ChatGPT decisions with the court.

The court called for a hearing and asked the attorney a series of highly uncomfortable questions. Although the court did not believe that the attorney had intended to deceive it by relying on non-existent authority, the attorney's actions fell short of the standards required in the practice of law. The judge ultimately issued a 43-page opinion discussing what happened and why it was sanctionable. The attorney was required to pay a monetary fine and to send the opinion to his client and to the real federal judges whom ChatGPT had identified as the authors of the fake cases. As if these sanctions were not enough, the attorney's name is now associated with submitting fake cases in federal court. As of August 2023, Google searches for terms such as "ChatGPT lawyer" or "ChatGPT fake cases" turn up millions of results, pages and pages of which identify the attorney and his firm by name. It may take years, if not longer, before this story stops being one of the first results for anyone searching the attorney's name or that of his firm.

Although there were multiple points of failure for the attorney in the Mata case, the matter first went off the rails when he tried to use a tool that he did not understand to perform legal research. He believed that ChatGPT was a search engine that would return citations corresponding to real court decisions. He did not know that the program generates predictive text, producing what appears to be a good answer to the prompt rather than the court decisions he actually needed. Had the attorney understood that the program could generate text, rather than retrieve text from elsewhere, he might have viewed the results more skeptically, and the problem might have been avoided.

Mata provides a clear lesson for legal professionals seeking to use new tools in their practice. The professional must understand how the tools work and frame requests carefully so that the responses address the substance of the issue rather than merely the literal text of the prompt. More importantly, the professional must exercise his or her professional judgment in evaluating the tool's output. If something produced by AI seems too good to be true, the professional must confirm its accuracy before incorporating the answer into work product. Although AI might provide helpful input if used correctly, an attorney who substitutes AI responses for traditional legal work product risks damage to his or her professional reputation or, more importantly, to the client's case.
