Abstract
Do large language models (LLMs) know the law? These models are increasingly
being used to augment legal practice, education, and research, yet their
revolutionary potential is threatened by the presence of hallucinations --
textual output that is not consistent with legal facts. We present the first
systematic evidence of these hallucinations, documenting LLMs' varying
performance across jurisdictions, courts, time periods, and cases. Our work
makes four key contributions. First, we develop a typology of legal
hallucinations, providing a conceptual framework for future research in this
area. Second, we find that legal hallucinations are alarmingly prevalent,
occurring between 58% of the time with ChatGPT 4 and 88% of the time with Llama 2 when
these models are asked specific, verifiable questions about random federal
court cases. Third, we illustrate that LLMs often fail to correct a user's
incorrect legal assumptions in a contra-factual question setup. Fourth, we
provide evidence that LLMs cannot always predict, or do not always know, when
they are producing legal hallucinations. Taken together, our findings caution
against the rapid and unsupervised integration of popular LLMs into legal
tasks. Even experienced lawyers must remain wary of legal hallucinations, and
the risks are highest for those who stand to benefit from LLMs the most -- pro
se litigants or those without access to traditional legal resources.