Showing posts with label Anthropic.

Saturday, May 17, 2025

Anthropic’s law firm throws Claude under the bus over citation errors in court filing; The Register, May 15, 2025

 Thomas Claburn, The Register; Anthropic’s law firm throws Claude under the bus over citation errors in court filing

"An attorney defending AI firm Anthropic in a copyright case brought by music publishers apologized to the court on Thursday for citation errors that slipped into a filing after using the biz's own AI tool, Claude, to format references.

The incident reinforces what's becoming a pattern in legal tech: while AI models can be fine-tuned, people keep failing to verify the chatbot's output, despite the consequences.

The flawed citations, or "hallucinations," appeared in an April 30, 2025 declaration [PDF] from Anthropic data scientist Olivia Chen in a copyright lawsuit music publishers filed in October 2023.

But Chen was not responsible for introducing the errors, which appeared in footnotes 2 and 3.

Ivana Dukanovic, an attorney with Latham & Watkins, the firm defending Anthropic, stated that after a colleague located a supporting source for Chen's testimony via Google search, she used Anthropic's Claude model to generate a formatted legal citation. Chen and defense lawyers failed to catch the errors in subsequent proofreading.

"After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article," explained Dukanovic in her May 15, 2025 declaration [PDF].

"Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors.

"Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai."...

The hallucinations of AI models keep showing up in court filings.

Last week, in a plaintiff's claim against insurance firm State Farm (Jacquelyn "Jackie" Lacey v. State Farm General Insurance Company et al), former Judge Michael R. Wilner, the Special Master appointed to handle the dispute, sanctioned [PDF] the plaintiff's attorneys for misleading him with AI-generated text. He directed the plaintiff's legal team to pay more than $30,000 in court costs that they wouldn't have otherwise had to bear.

After reviewing a supplemental brief filed by the plaintiffs, Wilner found that "approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way."

Two of the citations, he said, do not exist, and several cited phony judicial opinions."
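
The failure mode The Register describes, a model-formatted citation whose title and authors don't match the article at the very link the model was given, is mechanically checkable before a filing goes out. Below is a minimal sketch of such a cross-check; the function name, the flagging logic, and the use of the requests and BeautifulSoup libraries are illustrative assumptions on my part, not anything from the filing or from Latham & Watkins' actual process.

```python
import requests                 # pip install requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4


def verify_citation(url: str, cited_title: str, cited_authors: list[str]) -> list[str]:
    """Fetch the cited page and report mismatches against the
    model-formatted citation. Returns human-readable problems."""
    problems = []
    page = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # Prefer the og:title metadata, falling back to the <title> tag.
    og = page.find("meta", property="og:title")
    if og and og.get("content"):
        page_title = og["content"].strip()
    elif page.title and page.title.string:
        page_title = page.title.string.strip()
    else:
        page_title = ""

    if cited_title.lower() not in page_title.lower():
        problems.append(f"title mismatch: cited {cited_title!r}, page says {page_title!r}")

    # Each author named in the citation should appear somewhere on the page.
    text = page.get_text(" ").lower()
    problems += [f"author {a!r} not found on page"
                 for a in cited_authors if a.lower() not in text]
    return problems


# Usage: run before the citation goes into a filing; an empty list means
# the citation at least matches the page it points to.
for issue in verify_citation("https://example.com/article",
                             "Title The Model Returned",
                             ["Author The Model Returned"]):
    print("FLAG:", issue)
```

A check along these lines would have flagged both errors Dukanovic describes: the inaccurate title and the incorrect authors.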

Thursday, May 15, 2025

Anthropic expert accused of using AI-fabricated source in copyright case; Reuters, May 13, 2025

Reuters; Anthropic expert accused of using AI-fabricated source in copyright case

"Van Keulen asked Anthropic to respond by Thursday to the accusation, which the company said appeared to be an inadvertent citation error. He rejected the music companies' request to immediately question the expert but said the allegation presented "a very serious and grave issue," and that there was "a world of difference between a missed citation and a hallucination generated by AI.""

Wednesday, March 26, 2025

Anthropic wins early round in music publishers' AI copyright case; Reuters, March 26, 2025

Reuters; Anthropic wins early round in music publishers' AI copyright case

"Artificial intelligence company Anthropic convinced a California federal judge on Tuesday to reject a preliminary bid to block it from using lyrics owned by Universal Music Group and other music publishers to train its AI-powered chatbot Claude.

U.S. District Judge Eumi Lee said that the publishers' request was too broad and that they failed to show Anthropic's conduct caused them "irreparable harm.""

Tuesday, December 31, 2024

Anthropic Agrees to Enforce Copyright Guardrails on New AI Tools; Bloomberg Law, December 30, 2024

 Annelise Levy, Bloomberg Law; Anthropic Agrees to Enforce Copyright Guardrails on New AI Tools

"Anthropic PBC must apply guardrails to prevent its future AI tools from producing infringing copyrighted content, according to a Monday agreement reached with music publishers suing the company for infringing protected song lyrics. 

Eight music publishers—including Universal Music Corp. and Concord Music Group—and Anthropic filed a stipulation partly resolving the publishers’ preliminary injunction motion in the US District Court for the Northern District of California. The publishers’ request that Anthropic refrain from using unauthorized copies of lyrics to train future AI models remains pending."
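
The stipulation doesn't say how such guardrails are to be implemented, and Anthropic hasn't disclosed its method. One common approach, shown here purely as an illustrative sketch and not as Anthropic's actual technique, is an output filter that refuses completions overlapping too heavily with an index of protected lyrics, for example via word n-gram matching:

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Lowercased word n-grams, a cheap fingerprint for overlap checks."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def build_lyrics_index(protected_lyrics: list[str], n: int = 8) -> set[tuple[str, ...]]:
    """Union of n-grams across every protected lyric in the corpus."""
    index: set[tuple[str, ...]] = set()
    for lyric in protected_lyrics:
        index |= ngrams(lyric, n)
    return index


def blocks_output(generation: str, index: set[tuple[str, ...]],
                  n: int = 8, max_hits: int = 2) -> bool:
    """True if the model output reproduces more than `max_hits`
    n-word runs found in the protected-lyrics index."""
    return len(ngrams(generation, n) & index) > max_hits


# Usage: screen a draft completion before returning it to the user.
index = build_lyrics_index(["lyric text licensed for matching goes here"])
if blocks_output("candidate model output text", index):
    print("blocked: output overlaps protected lyrics")
```

The n-gram length and hit threshold are arbitrary knobs in this sketch; a production filter would also have to handle paraphrase and formatting variation, which exact matching does not.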

Tuesday, August 20, 2024

Authors sue Claude AI chatbot creator Anthropic for copyright infringement; AP, August 19, 2024

MATT O’BRIEN, AP; Authors sue Claude AI chatbot creator Anthropic for copyright infringement

"A group of authors is suing artificial intelligence startup Anthropic, alleging it committed “large-scale theft” in training its popular chatbot Claude on pirated copies of copyrighted books.

While similar lawsuits have piled up for more than a year against competitor OpenAI, maker of ChatGPT, this is the first from writers to target Anthropic and its Claude chatbot.

The smaller San Francisco-based company — founded by ex-OpenAI leaders — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way...

The lawsuit was brought by a trio of writers — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — who are seeking to represent a class of similarly situated authors of fiction and nonfiction...

What links all the cases is the claim that tech companies ingested huge troves of human writings to train AI chatbots to produce human-like passages of text, without getting permission or compensating the people who wrote the original works. The legal challenges are coming not just from writers but visual artists, music labels and other creators who allege that generative AI profits have been built on misappropriation...

But the lawsuit against Anthropic accuses it of using a dataset called The Pile that included a trove of pirated books. It also disputes the idea that AI systems are learning the way humans do."

Friday, October 20, 2023

Music publishers sue Amazon-backed AI company over song lyrics; The Guardian, October 19, 2023

The Guardian and agencies; Music publishers sue Amazon-backed AI company over song lyrics

"Music publishers Universal Music, ABKCO and Concord Publishing sued the artificial intelligence company Anthropic in Tennessee federal court on Wednesday, accusing it of misusing “innumerable” copyrighted song lyrics to train its chatbot Claude.

The lawsuit said Anthropic violates the publishers’ rights through its use of lyrics from at least 500 songs ranging from the Beach Boys’ God Only Knows and the Rolling Stones’ Gimme Shelter to Mark Ronson and Bruno Mars’ Uptown Funk and Beyoncé’s Halo.

The lawsuit accused Anthropic of infringing the publishers’ copyrights by copying their lyrics without permission as part of the “massive amounts of text” that it scrapes from the internet to train Claude to respond to human prompts."