
Thursday, February 19, 2026

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

 Ashley Capoot, CNBC; Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

 REBECCA HEILWEIL, Fast Company; Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

 Dave Lawler, Maria Curi, Mike Allen, Axios; Pentagon threatens Anthropic punishment

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."

Monday, February 2, 2026

AI agents now have their own Reddit-style social network, and it’s getting weird fast; Ars Technica, February 2, 2026

BENJ EDWARDS, Ars Technica; AI agents now have their own Reddit-style social network, and it’s getting weird fast

"On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called “Clawdbot” and then “Moltbot”) personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a “sister” it has never met."

Friday, January 23, 2026

Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026

CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics

"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), “genuinely helpful” (benefiting its users), as well as being “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."

Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Thursday, August 28, 2025

Anthropic’s surprise settlement adds new wrinkle in AI copyright war; Reuters, August 27, 2025

Reuters; Anthropic’s surprise settlement adds new wrinkle in AI copyright war

"Anthropic's class action settlement with a group of U.S. authors this week was a first, but legal experts said the case's distinct qualities complicate the deal's potential influence on a wave of ongoing copyright lawsuits against other artificial-intelligence focused companies like OpenAI, Microsoft and Meta Platforms.

Amazon-backed Anthropic was under particular pressure, with a trial looming in December after a judge found it liable for pirating millions of copyrighted books. The terms of the settlement, which require a judge's approval, are not yet public. And U.S. courts have just begun to wrestle with novel copyright questions related to generative AI, which could prompt other defendants to hold out for favorable rulings."

Tuesday, June 24, 2025

Anthropic’s AI copyright ‘win’ is more complicated than it looks; Fast Company, June 24, 2025

CHRIS STOKEL-WALKER, Fast Company; Anthropic’s AI copyright ‘win’ is more complicated than it looks

"And that’s the catch: This wasn’t an unvarnished win for Anthropic. Like other tech companies, Anthropic allegedly sourced training materials from piracy sites for ease—a fact that clearly troubled the court. “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote, referring to Anthropic’s alleged pirating of more than 7 million books.

That alone could carry billions in liability, with statutory damages starting at $750 per book—a trial on that issue is still to come.

So while tech companies may still claim victory (with some justification, given the fair use precedent), the same ruling also implies that companies will need to pay substantial sums to legally obtain training materials. OpenAI, for its part, has in the past argued that licensing all the copyrighted material needed to train its models would be practically impossible.

Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction: The U.S.,” she says. “I think they don’t entirely have purchase over this thing about whether or not it was transformative in the sense of changing Claude’s output.”"

Anthropic wins key US ruling on AI training in authors' copyright lawsuit; Reuters, June 24, 2025

Reuters; Anthropic wins key US ruling on AI training in authors' copyright lawsuit

 "A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.

Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made "fair use" of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.

Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a "central library" infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement."

Wednesday, March 26, 2025

Anthropic wins early round in music publishers' AI copyright case; Reuters, March 26, 2025

Reuters; Anthropic wins early round in music publishers' AI copyright case

"Artificial intelligence company Anthropic convinced a California federal judge on Tuesday to reject a preliminary bid to block it from using lyrics owned by Universal Music Group and other music publishers to train its AI-powered chatbot Claude.

U.S. District Judge Eumi Lee said that the publishers' request was too broad and that they failed to show Anthropic's conduct caused them "irreparable harm.""