
Monday, September 22, 2025

If Anyone Builds It, Everyone Dies review – how AI could kill us all; The Guardian; September 22, 2025

The Guardian; If Anyone Builds It, Everyone Dies review – how AI could kill us all

"“History,” they write, “is full of … examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chornobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s important to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing, poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Worryingly, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% might reflect a kind of hysterical monomania, or an especially thorough engagement with the problem. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say."

Saturday, September 13, 2025

A.I.’s Prophet of Doom Wants to Shut It All Down; The New York Times, September 12, 2025

The New York Times; A.I.’s Prophet of Doom Wants to Shut It All Down

"The first time I met Eliezer Yudkowsky, he said there was a 99.5 percent chance that A.I. was going to kill me.

I didn’t take it personally. Mr. Yudkowsky, 46, is the founder of the Machine Intelligence Research Institute, a Berkeley-based nonprofit that studies risks from advanced artificial intelligence.

For the last two decades, he has been Silicon Valley’s version of a doomsday preacher — telling anyone who will listen that building powerful A.I. systems is a terrible idea, one that will end in disaster.

That is also the message of Mr. Yudkowsky’s new book, “If Anyone Builds It, Everyone Dies.” The book, co-written with MIRI’s president, Nate Soares, is a distilled, mass-market version of the case they have been making to A.I. insiders for years.

Their goal is to stop the development of A.I. — and the stakes, they say, are existential...

And what about the good things that A.I. can do? Wouldn’t shutting down A.I. development also mean delaying cures for diseases, A.I. tutors for students and other benefits?

“We totally acknowledge the good effects,” he replied. “Yep, these things could be great tutors. Yep, these things sure could be useful in drug discovery. Is that worth exterminating all life on Earth? No.”"