Showing posts with label AI doubters. Show all posts

Monday, April 13, 2026

Sam Altman May Control Our Future—Can He Be Trusted?; The New Yorker, April 6, 2026

The New Yorker; Sam Altman May Control Our Future—Can He Be Trusted?

"Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.”"

The three realities of AI; Axios, April 13, 2026

Ina Fried, Axios; The three realities of AI

"Three distinct camps are forming around AI: power users, doubters and resisters.

Why it matters: AI isn't just advancing — it's fragmenting how people see the world.

The big picture: The disconnect is showing up everywhere — from job-loss fears to data center protests to actual violence.

Doubters still see AI as glitchy chatbots and viral fails. They aren't using its full capabilities.

Power users run AI agents around the clock, trading tips on how to automate work and decision-making.
Resisters understand AI, think they know where it's headed and want no part of it."