Showing posts with label Sam Altman. Show all posts

Wednesday, March 11, 2026

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Tuesday, March 3, 2026

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer, International Relations, Deakin University, The Conversation; The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’


"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

Monday, March 2, 2026

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Sunday, March 1, 2026

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns; The Guardian, February 28, 2026

The Guardian; OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance

"OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors.

Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used to those ends.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.

Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”."

Saturday, February 28, 2026

OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash; The New York Times, February 27, 2026

The New York Times; OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash

"OpenAI, the maker of ChatGPT, said on Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technologies for classified systems, just hours after President Trump ordered federal agencies to stop using A.I. technology made by rival Anthropic.

Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose, a term required by the Pentagon. But OpenAI also said it had found a way to ensure that its technologies would adhere to its safety principles by installing specific technical guardrails on its systems."

Thursday, October 30, 2025

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025

The Guardian; AI psychosis is a growing danger. ChatGPT is moving in the wrong direction


[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict of interest-riven assessment that AI's benefits outweigh a corpus of evidence establishing increasingly documented risks and harms of AI to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."

Saturday, October 11, 2025

AI videos of dead celebrities are horrifying many of their families; The Washington Post, October 11, 2025

The Washington Post; AI videos of dead celebrities are horrifying many of their families


[Kip Currier: OpenAI CEO Sam Altman's reckless actions in releasing Sora 2.0 without guardrails and accountability mechanisms exemplify Big Tech's ongoing Zuckerberg-ian "Move Fast and Break Things" modus operandi in the AI Age.

Altman also had to recently walk back his ill-conceived directive that copyright holders would need to opt-out of having their copyrighted works used as AI training data (yet again!), rather than the burden being on OpenAI to secure their opt-ins through licensing.

To learn more about potential further copyright-related questionable conduct by OpenAI, read this 10/10/25 Bloomberg Law article: OpenAI Risks Billions as Court Weighs Privilege in Copyright Row]

[Excerpt]

"OpenAI said the text-to-video tool would depict real people only with their consent. But it exempted “historical figures” from these limits during its launch last week, allowing anyone to make fake videos resurrecting public figures, including activists, celebrities and political leaders — and leaving some of their relatives horrified.

“It is deeply disrespectful and hurtful to see my father’s image used in such a cavalier and insensitive manner when he dedicated his life to truth,” Shabazz, whose father was assassinated in front of her in 1965 when she was 2, told The Washington Post. She questioned why the developers were not acting “with the same morality, conscience, and care … that they’d want for their own families.”

Sora’s videos have sparked agitation and disgust from many of the depicted celebrities’ loved ones, including actor Robin Williams’s daughter, Zelda Williams, who pleaded in an Instagram post recently for people to “stop sending me AI videos of dad.”"

OpenAI’s Sora Is in Serious Trouble; Futurism, October 10, 2025

Futurism; OpenAI’s Sora Is in Serious Trouble

"The cat was already out of the bag, though, sparking what’s likely to be immense legal drama for OpenAI. On Monday, the Motion Picture Association, a US trade association that represents major film studios, released a scorching statement urging OpenAI to “take immediate and decisive action” to stop the app from infringing on copyrighted media.

Meanwhile, OpenAI appears to have come down hard on what kind of text prompts can be turned into AI slop on Sora, implementing sweeping new guardrails presumably meant to appease furious rightsholders and protect their intellectual property.

As a result, power users experienced major whiplash that’s tarnishing the launch’s image even among fans. It’s a lose-lose moment for OpenAI’s flashy new app — either aggravate rightsholders by allowing mass copyright infringement, or turn it into yet another mind-numbing screensaver-generating experience like Meta’s widely mocked Vibes.

“It’s official, Sora 2 is completely boring and useless with these copyright restrictions. Some videos should be considered fair use,” one Reddit user lamented.

Others accused OpenAI of abusing copyright to hype up its new app...

How OpenAI’s eyebrow-raising ask-for-forgiveness-later approach to copyright will play out in the long term remains to be seen. For one, the company may already be in hot water, as major Hollywood studios have already started suing over less."

Friday, October 10, 2025

You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out; Gizmodo, October 8, 2025

Gizmodo; You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

 "OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well."

It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?; The Guardian, October 10, 2025

The Guardian; It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?

"I’ve seen it said that OpenAI’s motto should be “better to beg forgiveness than ask permission”, but that cosies it preposterously. Its actual motto seems to be “we’ll do what we want and you’ll let us, bitch”. Consider Altman’s recent political journey. “To anyone familiar with the history of Germany in the 1930s,” Sam warned in 2016, “it’s chilling to watch Trump in action.” He seems to have got over this in time to attend Donald Trump’s second inauguration, presumably because – if we have to extend his artless and predictable analogy – he’s now one of the industrialists welcome in the chancellery to carve up the spoils. “Thank you for being such a pro-business, pro-innovation president,” Sam simpered to Trump at a recent White House dinner for tech titans. “It’s a very refreshing change.” Inevitably, the Trump administration has refused to bring forward any AI regulation at all.

Meanwhile, please remember something Sam and his ironicidal maniacs said earlier this year, when it was suggested that the Chinese AI chatbot DeepSeek might have been trained on some of OpenAI’s work. “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more,” his firm’s anguished statement ran. “We take aggressive, proactive countermeasures to protect our technology.” Hilariously, it seemed that the last entity on earth with the power to fight AI theft was OpenAI."

Wednesday, October 8, 2025

OpenAI wasn’t expecting Sora’s copyright drama; The Verge, October 8, 2025

Hayden Field, The Verge; OpenAI wasn’t expecting Sora’s copyright drama

"When OpenAI released its new AI-generated video app Sora last week, it launched with an opt-out policy for copyright holders — media companies would need to expressly indicate they didn’t want their AI-generated characters running rampant on the app. But after days of Nazi SpongeBob, criminal Pikachu, and Sora-philosophizing Rick and Morty, OpenAI CEO Sam Altman announced the company would reverse course and “let rightsholders decide how to proceed.”

In response to a question about why OpenAI changed its policy, Altman said that it came from speaking with stakeholders and suggested he hadn’t expected the outcry.

“I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses,” Altman said. “It felt more different to images than people expected.”"

Sunday, October 5, 2025

OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'; PC Gamer, October 5, 2025

PC Gamer; OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'

"This video is just one of many examples, but you'll have a much harder time finding Sora-generated videos containing Marvel or Disney characters. As reported by Automaton, Sora appears to be refusing prompts containing references to American IP, but Japanese IP didn't seem to be getting the same treatment over the past week.

Japanese lawyer and House of Representatives member Akihisa Shiozaki called for action to protect creatives in a post on X (formerly Twitter), which has been translated by Automaton: "I’ve tried out [Sora 2] myself, but I felt that it poses a serious legal and political problem. We need to take immediate action if we want to protect leading Japanese creators and the domestic content industry, and help them further develop. (I wonder why Disney and Marvel characters can’t be displayed).""

Saturday, October 4, 2025

Sam Altman says Sora will add ‘granular,’ opt-in copyright controls; TechCrunch, October 4, 2025

Anthony Ha, TechCrunch; Sam Altman says Sora will add ‘granular,’ opt-in copyright controls

"OpenAI may be reversing course on how it approaches copyright and intellectual property in its new video app Sora.

Prior to Sora’s launch this week, The Wall Street Journal reported that OpenAI had been telling Hollywood studios and agencies that they needed to explicitly opt out if they didn’t want their IP to be included in Sora-generated videos.

Despite being invite-only, the app quickly climbed to the top of the App Store charts. Sora’s most distinctive feature may be its “cameos,” where users can upload their biometric data to see their digital likeness featured in AI-generated videos.

At the same time, users also seem to delight in flouting copyright laws by creating videos with popular, studio-owned characters. In some cases, those characters might even criticize the company’s approach to copyright, for example in videos where Pikachu and SpongeBob interact with deepfakes of OpenAI CEO Sam Altman.

In a blog post published Friday, Altman said the company is already planning two changes to Sora, first by giving copyright holders “more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.”"

Sunday, June 8, 2025

OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'; Foxbusiness, June 6, 2025

Foxbusiness; OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'

"The OpenAI co-founder said the case has accelerated the need for a conversation about "AI privilege," in which "talking to an AI should be like talking to a lawyer or a doctor.""

Sunday, December 8, 2024

Google CEO: AI development is finally slowing down—‘the low-hanging fruit is gone’; CNBC, December 8, 2024

Megan Sauer, CNBC; Google CEO: AI development is finally slowing down—‘the low-hanging fruit is gone’

"Now, with the industry’s competitive landscape somewhat established — multiple big tech companies, including Google, have competing models — it’ll take time for another technological breakthrough to shock the AI industry into hyper-speed development again, Pichai said at the New York Times’ DealBook Summit last week.

“I think the progress is going to get harder. When I look at [2025], the low-hanging fruit is gone,” said Pichai, adding: “The hill is steeper ... You’re definitely going to need deeper breakthroughs as we get to the next stage.”...

Some tech CEOs, like Microsoft’s Satya Nadella, agree with Pichai. “Seventy years of the Industrial Revolution, there wasn’t much industry growth, and then it took off ... it’s never going to be linear,” Nadella said at the Fast Company Innovation Festival 2024 in October.

Others disagree, at least publicly. OpenAI CEO Sam Altman, for example, posted “there is no wall” on social media platform X in November — a response to reports that the recently released ChatGPT-4 was only moderately better than previous models."