
Friday, April 24, 2026

DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.; The New York Times, April 24, 2026

Meaghan Tobin, The New York Times; DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.

"DeepSeek released its models as open source, which means others can freely use and modify them. By contrast, OpenAI and Anthropic kept their leading models proprietary. The episode demonstrated that an open-source system could perform almost as well as closed versions. In the months that followed, Chinese firms released dozens of other open-source models. By the end of 2025, these models made up a significant share of global A.I. usage.

On Friday, DeepSeek released a preview of V4, its long-awaited follow-up model, which it intends to open source. The new model excels at writing computer code, an increasingly important skill for leading A.I. systems. It significantly outperformed every other open-source system at generating code, according to tests from Vals AI, a company that tracks the performance of A.I. technologies.

DeepSeek released its new model just days after Moonshot AI, another Chinese start-up, introduced its latest open-source model, Kimi 2.6. While these systems trail the coding capabilities of the leading U.S. models from Anthropic and OpenAI, the gap is narrowing.

The implications are meaningful. Using A.I. to write code is faster and frees up human programmers to focus on bigger issues. It also means people can use DeepSeek’s latest release to power A.I. agents, which are personal digital assistants that can use other software applications on behalf of office workers, including spreadsheets, online calendars and email services."

Thursday, April 23, 2026

Anthropic seeks pivotal court win in music publisher lawsuit over AI training; Reuters, April 21, 2026

Reuters; Anthropic seeks pivotal court win in music publisher lawsuit over AI training

"Artificial intelligence company Anthropic has asked a California federal court to ​rule in its favor in a copyright lawsuit brought by music publishers Universal Music Group, Concord ‌and ABKCO, arguing it made "fair use" of their song lyrics to train its AI-powered chatbot Claude.

Anthropic's Monday filing addresses the key question for a wave of high-stakes copyright cases brought by creators against tech companies: is it legally permissible to copy millions of copyrighted works ​without permission to train AI models?...

The lawsuit ​is one of dozens of disputes between copyright owners such as authors and news outlets, and tech giants ​including OpenAI, Microsoft and Meta Platforms over the training of their AI systems. Amazon- and Google-backed Anthropic was the first major AI ‌company ⁠to settle one of the cases, agreeing last yearto pay a group of authors $1.5 billion to resolve a class-action lawsuit."

Wednesday, April 22, 2026

Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era; The New York Times, April 22, 2026

The New York Times; Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era

Artificial intelligence tools are making it faster than ever to reproduce creative work. Does copyright even matter anymore?

"Sigrid Jin was waiting to board a plane when he saw stunning news that artificial intelligence start-up Anthropic had accidentally leaked the source code for Claude Code, its popular A.I. agent. Mr. Jin, 25, an undergraduate student, scrambled to post a copy online. His worried girlfriend quickly texted him: Was he violating copyright law?

Mr. Jin turned to a team of A.I. assistants for a solution. He directed them to rewrite the leaked code in another programming language, then shared that version online. Within hours, more than 100,000 people had liked or linked to it.

Anthropic, one of the leading A.I. companies alongside OpenAI, has said the leak had been caused by human error and, citing copyright violations, demanded that GitHub, an online library of computer code, remove posts sharing the code. Thousands of posts were taken down. But Mr. Jin’s version remains online. He said Anthropic had not asked him to take it down.

It is unclear whether Anthropic, which did not respond to questions from The New York Times, is drawing a distinction with the rewritten code. Mr. Jin said he believed rewriting the code transformed it into a new work, one that Anthropic could not claim ownership over.

He said he was driven less by money or fame than by a desire to make a broader philosophical point. What is the value of copyrighted intellectual property in an era when A.I. can easily replicate not just computer code but art, music and literature in minutes?

“I just wanted to raise some ethical questions in the A.I. agent era,” he said. “Any creative work can be reproduced in a second.”"

Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?; The New York Times, April 20, 2026

David DeSteno, The New York Times; Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?

"In a public statement of its intentions for its Claude chatbot, the artificial intelligence company Anthropic has said that it wants Claude to be “a genuinely good, wise and virtuous agent.” The company raised the moral stakes this month, when it announced that its latest A.I. model, Claude Mythos Preview, poses too great a cybersecurity threat to be widely released. Behind the scenes, Anthropic has been trying to shore up the ethical foundations of its products, working with a Catholic priest and consulting with other prominent Christians to help foster Claude’s moral and spiritual development.

Anthropic’s intentions are admirable, but the project of drawing on religion to cultivate the ethical behavior of Claude (or any other chatbot) is likely to fail. Not because there isn’t moral wisdom in Scripture, sermons and theological treatises — texts that Claude has undoubtedly already scraped from the web and integrated — but because Claude is missing a crucial mechanism by which religion fosters moral growth: a body."

Tuesday, April 21, 2026

Anthropic Settlement Hearing Comes into Focus; Publishers Weekly, April 20, 2026

Jim Milliot, Publishers Weekly; Anthropic Settlement Hearing Comes into Focus

"With the May 14 Bartz v. Anthropic settlement fairness hearing drawing closer, both the Authors Guild and Authors Alliance have issued updates on where the $1.5 billion copyright infringement agreement stands....

The Guild noted that with the higher claim rate, the payout per work will be closer to the $3,000 per work estimated in the lawsuit than to the $4,876 payout that was based on the number of works claimed in March...

The Authors Alliance update focused on the various objections that have been made about the settlement and which are likely to be raised in the settlement hearing. The objections were unsealed following a motion filed by professor Lea Victoria Bishop.

Among the objections are the claims that the distribution plan systematically favors publishers over authors; that the class notice was “misleading/coercive,” since statutory damages per infringement can technically be up to $150,000 per work, which would make the settlement amount per work inadequate; and that the settlement sets a “dangerous precedent” by permitting “a multi-billion dollar AI company to ‘buy’ its way out of massive piracy for a ‘discounted’ rate.”

Judge Martínez-Olguín, who took over the case following the retirement of Judge William Alsup, will oversee the May 14 hearing set for 2 p.m. in the San Francisco Federal Courthouse. A Zoom link will be available for those who cannot make the trip to San Francisco."

Sunday, April 19, 2026

The philosopher trying to teach ethics to AI developers; NPR, April 17, 2026

NPR; The philosopher trying to teach ethics to AI developers

Thousands of authors seek share of Anthropic copyright settlement; Reuters, April 17, 2026

Reuters; Thousands of authors seek share of Anthropic copyright settlement

"Nearly 120,000 authors and other copyright holders are seeking a share of a $1.5 billion class-action settlement with Anthropic over the company's unauthorized use of their books in artificial-intelligence training, according to a ​filing in California federal court.

Claims have been filed for 91% of the more than 480,000 ‌works covered by the settlement, according to a court filing  in the case on Thursday.

A judge will consider whether to grant final approval to the settlement – the largest ever in a U.S. copyright case – at a hearing next month.

Anthropic was the first and ​remains the only major AI company to settle a U.S. class-action by copyright holders alleging AI ​platforms used their work without permission to train their systems."

Friday, April 17, 2026

AI Startups Have These Copyright Lawyers on Speed Dial; Bloomberg Law, April 16, 2026

David Schultz, Bloomberg Law; AI Startups Have These Copyright Lawyers on Speed Dial

"Something similar connects many of the top attorneys representing the artificial intelligence industry in its most consequential battles: their resumes.

The common thread is Durie Tangri. More than 50 attorneys from the defunct Bay Area intellectual property firm are at the center of epic Silicon Valley copyright fights, just more than three years after Morrison Foerster acquired the practice...

“Tech copyright is a small world,” said Joseph Gratz, one of the alums at Morrison.

The Durie Tangri alums have benefited from the demand in tech copyright law, said Gratz, who has appeared in court defending OpenAI in almost two dozen federal lawsuits...

One of the marquee cases Durie Tangri took on was the decade-long copyright infringement suit over Google’s book digitization. Sonal Mehta, a Durie Tangri alum who is now at WilmerHale, said the boutique relished taking on matters that ventured into uncharted territory.

“We weren’t afraid to be operating in gray areas or to be looking at where the law hadn’t fully developed,” Mehta said. “We didn’t need to feel like every argument had to be something that was a cookie cutter argument that had already been made and won 20 times before.”"

Sunday, April 12, 2026

The most 'ethical' AI company might also be the web's biggest freeloader; Business Insider, April 12, 2026

Business Insider; The most 'ethical' AI company might also be the web's biggest freeloader

"Cloudflare's latest data offers one of the clearest snapshots yet of how AI companies consume the web, and how little they give back.

The company, which powers roughly 20% of the internet, tracks how AI bots crawl websites versus how often those platforms send users back through referrals. The resulting "crawl-to-refer" ratio is a simple yet telling metric: how much value is extracted compared to returned.

The early April 2026 figures are stark. Anthropic is the worst by a wide margin, with a ratio of 8,800 to 1. That means its bots crawl webpages 8,800 times for every referral sent...

Anthropic's position is particularly striking given its reputation for being "ethical." That reputation has made it a preferred choice among some users who want to support more responsible AI development. This data highlights a different dimension of ethics — how companies interact with the broader web ecosystem that provides information for AI model outputs."
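
The arithmetic behind that figure is straightforward: pages crawled divided by referrals sent back. A minimal sketch of the calculation follows; the counts used are hypothetical placeholders, not Cloudflare's measurements.

# Hypothetical sketch of the "crawl-to-refer" ratio described above;
# the counts are placeholders, not Cloudflare's actual data.
def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """How many pages a bot crawls for every referral it sends back."""
    if referrals == 0:
        return float("inf")  # content is crawled but no traffic is referred back
    return crawls / referrals

# 8,800,000 crawled pages against 1,000 referrals gives the 8,800-to-1
# scale reported for Anthropic in early April 2026.
print(crawl_to_refer_ratio(8_800_000, 1_000))  # -> 8800.0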

Saturday, April 11, 2026

Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.; The Washington Post, April 11, 2026

The Washington Post; Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.

The artificial intelligence company asked religious leaders for guidance on building a moral chatbot.


"The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.


Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”"

How AI is getting better at finding security holes; NPR, April 11, 2026

NPR; How AI is getting better at finding security holes

"In the past few months, AI models have gone from producing hallucinations to becoming effective at finding security flaws in software, according to developers who maintain widely used cyber infrastructure. Those pieces of software, among other things, power operating systems and transfer data for things connected to the internet.

While these new capabilities can help developers make software more secure, they can also be weaponized by hackers and nation states to steal information and money or disrupt critical services.

The latest development of AI's cyber capability came on Tuesday, when AI lab Anthropic announced it had developed a powerful new model the company believes could "reshape cybersecurity." It said that its latest model, Mythos Preview, was able to find "high-severity vulnerabilities, including some in every major operating system and web browser." Not only that, the model was better at coming up with ways to exploit the vulnerabilities it found, which means malicious actors can more effectively achieve their goals.

For now, the company is limiting the access to the model to around 50 select companies and organizations "in an effort to secure the world's most critical software." They're calling the collaboration Project Glasswing, naming it after a butterfly species with transparent wings.

Anthropic says the risk for misuse is so high that it has no plans to release this particular model to the general public, according to the announcement, but it will release other related models. "Our eventual goal is to enable our users to safely deploy Mythos-class models at scale," the company wrote."

Wednesday, April 8, 2026

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Monday, April 6, 2026

US music publishers suing Anthropic make their case against AI 'fair use'; Reuters, March 24, 2026

Reuters; US music publishers suing Anthropic make their case against AI 'fair use'

"Music publishers Universal Music Group , Concord and ABKCO have asked a judge in California to rule that U.S. copyright law does not insulate artificial intelligence startup Anthropic from ​liability for copying their song lyrics to train its AI-powered chatbot Claude.

The publishers' request , filed on Monday ‌in federal court in San Jose, tees up a critical question in the legal battle between creators and tech companies: Does the doctrine of "fair use" apply to the copying of millions of copyrighted works to train AI models?"

Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude’s Source Code; Futurism, April 3, 2026

Futurism; Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude’s Source Code

"As the Wall Street Journal reports, Anthropic is scrambling to contain a leak of its Claude Code AI model’s source code by issuing a copyright takedown request for more than 8,000 copies of it — a gallingly ironic stance for the company to be taking, considering how it trained its models in the first place.

The leak isn’t considered to be an outright disaster; no customer data was exposed, Anthropic says, nor were the internal mathematical “weights” that determine how the AI “learns” and which distinguish it from other models. But it did expose the techniques its engineers used to get its AI model to act as an autonomous agent, a form of digital infrastructure coders call a harness, and other tricks for making the AI operate as seamlessly as it does.

Hence Anthropic’s copyright takedown request, which targets the thousands of copies that were shared on GitHub. It later narrowed its request from 8,000 copies to 96 copies, according to the WSJ reporting, claiming that the initial one covered more accounts than intended.

It’s certainly within Anthropic’s right to issue the takedown request, but the hypocrisy of Anthropic running to the law to protect its intellectual property is plain to see, especially for a company that’s relentlessly positioned itself as the ethical adult in the room."

Sunday, April 5, 2026

Claude's Constitution; Anthropic, January 21, 2026

Anthropic; Claude's Constitution

Our vision for Claude's character

"Claude’s constitution is a detailed description of Anthropic’s intentions for Claude’s values and behavior. It plays a crucial role in our training process, and its content directly shapes Claude’s behavior. It’s also the final authority on our vision for Claude, and our aim is for all of our other guidance and training to be consistent with it.

Training models is a difficult task, and Claude’s behavior might not always reflect the constitution’s ideals. We will be open—for example, in our system cards—about the ways in which Claude’s behavior comes apart from our intentions. But we think transparency about those intentions is important regardless.

The document is written with Claude as its primary audience, so it might read differently than you’d expect. For example, it’s optimized for precision over accessibility, and it covers various topics that may be of less interest to human readers. We also discuss Claude in terms normally reserved for humans (e.g., “virtue,” “wisdom”). We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.

This constitution is written for our mainline, general-access Claude models. We have some models built for specialized uses that don’t fully fit this constitution; as we continue to develop products for specialized use cases, we will continue to evaluate how to best ensure our models meet the core objectives outlined in this constitution.

For a summary of the constitution, and for more discussion of how we’re thinking about it, see our blog post “Claude’s new constitution.”

Powerful AI models will be a new kind of force in the world, and people creating them have a chance to help them embody the best in humanity. We hope this constitution is a step in that direction.

We’re releasing Claude’s constitution in full under a Creative Commons CC0 1.0 Deed, meaning it can be freely used by anyone for any purpose without asking for permission.

Many people at Anthropic and beyond contributed to the creation of this document, as did several Claude models. Amanda Askell is the primary author and wrote the majority of the text. Joe Carlsmith wrote significant parts of many sections and played a core role in revising the text. Chris Olah, Jared Kaplan, and Holden Karnofsky made significant contributions to its content and development. More detailed contribution statement and acknowledgments below.

The preface and the acknowledgements are not part of the official constitution."

The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code; Observer, March 31, 2026

Observer; The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code

"Father Brendan McGuire is writing a novel about a disenchanted monk and his A.I. companion. He’s doing it with Claude. That detail—a Catholic priest using Anthropic’s chatbot to explore questions of faith and artificial consciousness—tells you something about where Silicon Valley’s moral reckoning has arrived. McGuire, 60, leads St. Simon Catholic Parish in Los Altos, Calif., a congregation that counts some of the Valley’s A.I. researchers among its members. Earlier this year, he and a group of faith leaders helped Anthropic shape the Claude Constitution, the set of guiding principles governing how its A.I. behaves.

He is not, in other words, an outside critic. He is something more complicated: a true believer in both God and technology, trying to hold them in the same hand. “I left the tech industry, but it never really left me,” McGuire told Observer...

McGuire wasn’t Anthropic’s only religious collaborator. Bishop Paul Tighe of the Vatican’s Dicastery for Culture and Education and Brian Patrick Green, a technology ethics director at Santa Clara University, also reviewed the Claude Constitution. Green and other Catholic scholars recently filed a federal court brief supporting Anthropic in its lawsuit against the U.S. government, which challenges the company’s effective blacklisting by the Pentagon after it refused to allow its A.I. systems to be used for autonomous warfare or domestic surveillance. The brief praised those ethical limits as “minimal standards of ethical conduct for technical progress.”...

Anthropic says its engagement with religious voices—part of a broader effort to engage a wide variety of communities to keep pace with technological acceleration—is only a beginning. The company plans to expand outreach beyond Catholic institutions to other religious leaders going forward."

Saturday, April 4, 2026

AI agents are scrambling power users' brains; Axios, April 4, 2026

Megan Morrone, Axios; AI agents are scrambling power users' brains

"A growing number of software developers say AI coding tools are frying their brains. 

The big picture: The most popular agentic AI systems have triggered something that looks a lot like addiction among some of tech's highest performers."

Thursday, April 2, 2026

Anthropic boss makes big call on Australian copyright as artists say pay up; Australian Broadcasting Corporation, April 1, 2026

Clare Armstrong, Australian Broadcasting Corporation; Anthropic boss makes big call on Australian copyright as artists say pay up

"In short:

Anthropic CEO Dario Amodei has told a Canberra forum AI is moving faster than any technological change before it.

Mr Amodei says he is not trying to change Australia's mind on copyright, is worried about AI in the hands of autocratic countries, and feels a tax on profits is inevitable.

What's next?

The $555 billion company behind AI program Claude is facing pushback from artists over the use of copyrighted material to train its technology."

Wednesday, April 1, 2026

Anthropic Races to Contain Leak of Code Behind Claude AI Agent; The Wall Street Journal, April 1, 2026

Sam Schechner, The Wall Street Journal; Anthropic Races to Contain Leak of Code Behind Claude AI Agent

Developer issues copyright takedown request in bid to prevent competitors from cloning coding tool’s features

"Anthropic is racing to contain the fallout after accidentally exposing the underlying instructions it uses to direct Claude Code, the popular artificial-intelligence agent app that has won the company an edge with developers and businesses.

By Wednesday morning, Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions—known as source code—that developers had shared on programming platform GitHub."

Monday, March 30, 2026

Judge Blocks Pentagon Move Against Anthropic in AI Ethics Dispute; National Catholic Register, March 30, 2026

Jonah McKeown, National Catholic Register; Judge Blocks Pentagon Move Against Anthropic in AI Ethics Dispute

"A federal judge has temporarily blocked the Department of Defense from labeling American artificial intelligence (AI) company Anthropic a “supply chain risk,” a designation the Pentagon gave the company after Anthropic refused to allow the military to use its products for autonomous weaponry and mass surveillance.

The case has drawn interest from prominent Catholics due to the relative novelty of a major AI developer taking a stand in favor of ethical and socially responsible safeguards around the technology in the face of government coercion.

In a March 26 ruling, which is not a final decision in the case, Judge Rita Lin of the U.S. District Court for the Northern District of California said Anthropic has a high likelihood of ultimately winning its case and proving that the government’s “supply chain risk” designation violated, among other laws, the First and Fifth Amendments."