Showing posts with label AI-related violence.

Monday, April 13, 2026

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

Queenie Wong, Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, saying he hoped it might persuade people not to repeat the attack, whatever their opinions of him.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

Sam Altman May Control Our Future—Can He Be Trusted?; The New Yorker, April 6, 2026

The New Yorker; Sam Altman May Control Our Future—Can He Be Trusted?

"Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.”"

The three realities of AI; Axios, April 13, 2026

Ina Fried, Axios; The three realities of AI

"Three distinct camps are forming around AI: power users, doubters and resisters.

Why it matters: AI isn't just advancing — it's fragmenting how people see the world.

The big picture: The disconnect is showing up everywhere — from job-loss fears to data center protests to actual violence.

Doubters still see AI as glitchy chatbots and viral fails. They aren't using its full capabilities.

Power users run AI agents around the clock, trading tips on how to automate work and decision-making.

Resisters understand AI, think they know where it's headed and want no part of it."