Showing posts with label accuracy of information.

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Mirac Suzgun, Stanford University; Faiz Surani, Stanford University; Christopher D. Manning, Stanford University; Matthew Dahl, Yale University; Daniel E. Ho, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."
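The technique at the center of the abstract, retrieval-augmented generation, grounds a language model's answer in documents fetched for the query rather than in the model's parameters alone. The following is a minimal illustrative sketch of that general pattern in Python, assuming a toy keyword-overlap retriever and a stub in place of a real language model; the corpus, function names, and scoring are hypothetical and do not reflect the internals of Lexis+ AI, Westlaw AI-Assisted Research, or any other product the paper evaluates.

```python
# Minimal RAG sketch: retrieve passages relevant to a query, then pass them
# to a generator as grounding context. All names and data here are
# hypothetical, for illustration only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(q_terms & set(doc.lower().split())))
    return ranked[:k]

def generate(prompt: str) -> str:
    """Stand-in for a large language model call (stubbed for this sketch)."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def rag_answer(query: str, corpus: list[str]) -> str:
    """Answer a query with retrieved passages prepended as grounding context."""
    passages = retrieve(query, corpus)
    context = "\n".join(f"Source: {p}" for p in passages)
    prompt = f"{context}\n\nQuestion: {query}\nAnswer using only the sources above."
    return generate(prompt)

corpus = [
    "Feist v. Rural (1991): facts alone are not copyrightable.",
    "Fox News v. TVEyes (2d Cir. 2018): redistribution of clips was not fair use.",
    "Campbell v. Acuff-Rose (1994): commercial parody can be fair use.",
]
print(rag_answer("Was the TVEyes clip service fair use?", corpus))
```

Even with retrieval in the loop, the generation step can still misstate or overextend what the sources say, which is consistent with the paper's finding that RAG reduces, but does not eliminate, hallucination.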

Thursday, January 9, 2020

Free Textbooks for Law Students; Inside Higher Ed, January 3, 2020

Lindsay McKenzie, Inside Higher Ed; Free Textbooks for Law Students

"Law school is notoriously expensive, but a growing number of professors are pushing back on the idea that law textbooks must be expensive, too. Faculty members at the New York University School of Law have taken matters into their own hands by publishing their own textbooks at no cost to students."

Wednesday, December 5, 2018

Supreme Court hands Fox News another win in copyright case against TVEyes monitoring service; The Washington Post, December 3, 2018

Erik Wemple, The Washington Post; Supreme Court hands Fox News another win in copyright case against TVEyes monitoring service

"The Supreme Court’s decision not to hear the case could leave media critics scrambling. How to fact-check the latest gaffe on “Hannity”? Did Brian Kilmeade really say that? To be sure, cable-news watchers commonly post the most extravagant cable-news moments on Twitter and other social media — a democratic activity that lies outside of the TVEyes ruling, because it’s not a money-making thing. Yet Fox News watchdogs use TVEyes and other services to soak in the full context surrounding those widely circulated clips, and that task is due to get more complicated. That said, services may still provide transcripts without infringing the Fox News copyright."