Showing posts with label judges. Show all posts

Thursday, November 27, 2025

Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say; The New York Times, November 25, 2025

, The New York Times; Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say

"On Friday, the lawyers were joined by a group of 22 legal and technology scholars who warned that the unchecked use of A.I. could lead to wrongful convictions. The group, which filed its own brief with the state Supreme Court, included Barry Scheck, a co-founder of the Innocence Project, which has helped to exonerate more than 250 people; Chesa Boudin, a former district attorney of San Francisco; and Katherine Judson, executive director of the Center for Integrity in Forensic Sciences, a nonprofit that seeks to improve the reliability of criminal prosecutions.

The problem of A.I.-generated errors in legal papers has burgeoned along with the popular use of tools like ChatGPT and Gemini, which can perform a wide range of tasks, including writing emails, term papers and legal briefs. Lawyers and even judges have been caught filing court papers that were rife with fake legal references and faulty arguments, leading to embarrassment and sometimes hefty fines.

The Kjoller case, though, is one of the first in which prosecutors, whose words carry great sway with judges and juries, have been accused of using A.I. without proper safeguards...

Lawyers are not prohibited from using A.I., but they are required to ensure that their briefs, however they are written, are accurate and faithful to the law. Today’s artificial intelligence tools are known to sometimes “hallucinate,” or make things up, especially when asked complex legal questions...

Westlaw executives said that their A.I. tool does not write legal briefs, because they believe A.I. is not yet capable of the complex reasoning needed to do so...

Damien Charlotin, a senior researcher at HEC Paris, maintains a database that includes more than 590 cases from around the world in which courts and tribunals have detected hallucinated content. More than half involved people who represented themselves in court. Two-thirds of the cases were in United States courts. Only one, an Israeli case, involved A.I. use by a prosecutor."

Saturday, May 17, 2025

Anthropic’s law firm throws Claude under the bus over citation errors in court filing; The Register, May 15, 2025

Thomas Claburn, The Register; Anthropic’s law firm throws Claude under the bus over citation errors in court filing

"An attorney defending AI firm Anthropic in a copyright case brought by music publishers apologized to the court on Thursday for citation errors that slipped into a filing after using the biz's own AI tool, Claude, to format references.

The incident reinforces what's becoming a pattern in legal tech: while AI models can be fine-tuned, people keep failing to verify the chatbot's output, despite the consequences.

The flawed citations, or "hallucinations," appeared in an April 30, 2025 declaration [PDF] from Anthropic data scientist Olivia Chen in a copyright lawsuit music publishers filed in October 2023.

But Chen was not responsible for introducing the errors, which appeared in footnotes 2 and 3.

Ivana Dukanovic, an attorney with Latham & Watkins, the firm defending Anthropic, stated that after a colleague located a supporting source for Chen's testimony via Google search, she used Anthropic's Claude model to generate a formatted legal citation. Chen and defense lawyers failed to catch the errors in subsequent proofreading.

"After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article," explained Dukanovic in her May 15, 2025 declaration [PDF].

"Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors.

"Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai."...

The hallucinations of AI models keep showing up in court filings.

Last week, in a plaintiff's claim against insurance firm State Farm (Jacquelyn Jackie Lacey v. State Farm General Insurance Company et al), former Judge Michael R. Wilner, the Special Master appointed to handle the dispute, sanctioned [PDF] the plaintiff's attorneys for misleading him with AI-generated text. He directed the plaintiff's legal team to pay more than $30,000 in court costs that they wouldn't have otherwise had to bear.

After reviewing a supplemental brief filed by the plaintiffs, Wilner found that "approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way."

Two of the citations, he said, do not exist, and several cited phony judicial opinions."

Wednesday, May 7, 2025

AI of dead Arizona road rage victim addresses killer in court; The Guardian, May 6, 2025

Cy Neff, The Guardian; AI of dead Arizona road rage victim addresses killer in court

"Pelkey’s appearance from beyond the grave was made possible by artificial intelligence in what could be the first use of AI to deliver a victim impact statement. Stacey Wales, Pelkey’s sister, told local outlet ABC-15 that she had a recurring thought when gathering more than 40 impact statements from Chris’s family and friends.

“All I kept coming back to was, what would Chris say?” Wales said.

As AI spreads across society and enters the courtroom, the US judicial conference advisory committee has announced that it will begin seeking public comment as part of determining how to regulate the use of AI-generated evidence at trial."

Thursday, November 28, 2024

Fate of AI and Fair Use Copyright Report Will Depend on Judges; Bloomberg Law, November 26, 2024

Jorja Siemons, Bloomberg Law; Fate of AI and Fair Use Copyright Report Will Depend on Judges

"Federal courts—not the US Copyright Office—will have the final say on implementing any recommendations from the office’s forthcoming report on the fair use ramifications of training AI on protected works...

Judges across the country are presiding over roughly three dozen lawsuits against OpenAI Inc., Microsoft Corp., Meta Platforms Inc., Anthropic PBC, and other AI companies. How they apply that framework as they wrestle with novel legal questions remains to be seen—and it may be Congress that benefits the most from the Copyright Office’s analysis."

Sunday, July 23, 2023

How judges, not politicians, could dictate America’s AI rules; MIT Technology Review, July 17, 2023

Melissa Heikkilä, MIT Technology Review; How judges, not politicians, could dictate America’s AI rules

"It’s becoming increasingly clear that courts, not politicians, will be the first to determine the limits on how AI is developed and used in the US."