
Monday, January 26, 2026

Behind the Curtain: Anthropic's warning to the world; Axios, January 26, 2026

Jim VandeHei, Mike Allen, Axios; Behind the Curtain: Anthropic's warning to the world

"Anthropic CEO Dario Amodei, the architect of the most powerful and popular AI system for global business, is warning of the imminent "real danger" that super-human intelligence will cause civilization-level damage absent smart, speedy intervention.

  • In a 38-page essay, shared with us in advance of Monday's publication, Amodei writes: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species."

  • "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

Why it matters: Amodei's company has built some of the most advanced LLM systems in the world.


  • Anthropic's new Claude Opus 4.5 and coding and Cowork tools are the talk of Silicon Valley and America's C-suites. 

  • AI is doing 90% of the computer programming to build Anthropic's products, including its own AI.

Amodei, one of the most vocal moguls about AI risk, worries deeply that government, tech companies and the public are vastly underestimating what could go wrong. His memo — a sequel to his famous 2024 essay, "Machines of Loving Grace: How AI Could Transform the World for the Better" — was written to jar others, provoke a public debate and detail the risks.


  • Amodei insists he's optimistic that humans will navigate this transition — but only if AI leaders and government are candid with people and take the threats more seriously than they do today.

Amodei's concerns flow from his strong belief that within a year or two, we will face the stark reality of what he calls a "country of geniuses in a datacenter.""

Wednesday, December 3, 2025

‘The biggest decision yet’; The Guardian, December 2, 2025

The Guardian; ‘The biggest decision yet’

"Humanity will have to decide by 2030 whether to take the “ultimate risk” of letting artificial intelligence systems train themselves to become more powerful, one of the world’s leading AI scientists has said.

Jared Kaplan, the chief scientist and co-founder of the $180bn (£135bn) US startup Anthropic, said a choice was looming about how much autonomy the systems should be given to evolve.

The move could trigger a beneficial “intelligence explosion” – or be the moment humans end up losing control...

He is not alone at Anthropic in voicing concerns. One of his co-founders, Jack Clark, said in October he was both an optimist and “deeply afraid” about the trajectory of AI, which he called “a real and mysterious creature, not a simple and predictable machine”.

Kaplan said he was very optimistic about the alignment of AI systems with the interests of humanity up to the level of human intelligence, but was concerned about the consequences if and when they exceed that threshold."

Monday, September 22, 2025

If Anyone Builds It, Everyone Dies review – how AI could kill us all; The Guardian; September 22, 2025

The Guardian; If Anyone Builds It, Everyone Dies review – how AI could kill us all

"“History,” they write, “is full of … examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chornobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s important to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing, poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Worryingly, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% might reflect a kind of hysterical monomania, or an especially thorough engagement with the problem. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say."

Saturday, September 13, 2025

A.I.’s Prophet of Doom Wants to Shut It All Down; The New York Times, September 12, 2025

The New York Times; A.I.’s Prophet of Doom Wants to Shut It All Down

"The first time I met Eliezer Yudkowsky, he said there was a 99.5 percent chance that A.I. was going to kill me.

I didn’t take it personally. Mr. Yudkowsky, 46, is the founder of the Machine Intelligence Research Institute, a Berkeley-based nonprofit that studies risks from advanced artificial intelligence.

For the last two decades, he has been Silicon Valley’s version of a doomsday preacher — telling anyone who will listen that building powerful A.I. systems is a terrible idea, one that will end in disaster.

That is also the message of Mr. Yudkowsky’s new book, “If Anyone Builds It, Everyone Dies.” The book, co-written with MIRI’s president, Nate Soares, is a distilled, mass-market version of the case they have been making to A.I. insiders for years.

Their goal is to stop the development of A.I. — and the stakes, they say, are existential...

And what about the good things that A.I. can do? Wouldn’t shutting down A.I. development also mean delaying cures for diseases, A.I. tutors for students and other benefits?

“We totally acknowledge the good effects,” he replied. “Yep, these things could be great tutors. Yep, these things sure could be useful in drug discovery. Is that worth exterminating all life on Earth? No.”"