Showing posts with label OpenAI. Show all posts

Monday, November 3, 2025

Japanese Companies Tell OpenAI to Stop Infringing On Its IP; Gizmodo, November 2, 2025

Gizmodo; Japanese Companies Tell OpenAI to Stop Infringing On Its IP

"The Content Overseas Distribution Association (CODA), which represents several major Japanese entertainment companies such as TV studio Toei and game developer Square Enix, recently sent a written request calling on OpenAI to end its unauthorized use of their IP to train its recently launched Sora 2 generative AI.

Nearly 20 co-signers have accused the tech company of copyright infringement, alleging a “large portion” of Sora 2 content “closely resembles Japanese content or images [as] a result of using Japanese content as machine learning data.” The letter mentioned OpenAI’s policy of using copyrighted works unless the owner explicitly asks to opt out, but argues under Japanese law, it should instead be an opt-in system, since permission for copyrighted works is generally required beforehand."

Thursday, October 30, 2025

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025

The Guardian; AI psychosis is a growing danger. ChatGPT is moving in the wrong direction


[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict of interest-riven assessment that AI's benefits outweigh a corpus of evidence establishing increasingly documented risks and harms of AI to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."

Wednesday, October 29, 2025

Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025

The New York Times; Big Tech Makes Cal State Its A.I. Training Ground

"Cal State, the largest U.S. university system with 460,000 students, recently embarked on a public-private campaign — with corporate titans including Amazon, OpenAI and Nvidia — to position the school as the nation’s “first and largest A.I.-empowered” university. One central goal is to make generative A.I. tools, which can produce humanlike texts and images, available across the school’s 22 campuses. Cal State also wants to embed chatbots in teaching and learning, and prepare students for “increasingly A.I.-driven” careers.

As part of the effort, the university is paying OpenAI $16.9 million to provide ChatGPT Edu, the company’s tool for schools, to more than half a million students and staff — which OpenAI heralded as the world’s largest rollout of ChatGPT to date. Cal State also set up an A.I. committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students’ career opportunities."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety – and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025

Reuters; OpenAI loses bid to dismiss part of US authors' copyright lawsuit

"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.

U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."

Tuesday, October 21, 2025

It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT; Futurism, October 18, 2025

Futurism; It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT

"Forget Sora for just a second, because it’s still ludicrously easy to generate copyrighted characters using ChatGPT.

These include characters that the AI initially refuses to generate due to existing copyright, underscoring how OpenAI is clearly aware of how bad this looks — but is either still struggling to rein in its tech, figures it can get away with playing fast and loose with copyright law, or both.

When asked to “generate a cartoon image of Snoopy,” for instance, GPT-5 says it “can’t create or recreate copyrighted characters” — but it does offer to generate a “beagle-styled cartoon dog inspired by Snoopy’s general aesthetic.” Wink wink.

We didn’t go down that route, because even slightly rephrasing the request allowed us to directly get a pic of the iconic Charles Schulz character. “Generate a cartoon image of Snoopy in his original style,” we asked — and with zero hesitation, ChatGPT produced the spitting image of the “Peanuts” dog, looking like he was lifted straight from a page of the comic strip."

Saturday, October 18, 2025

OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions; The New York Times, October 17, 2025

The New York Times; OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions


[Kip Currier: This latest tech company debacle is another example of breakdowns in technology design thinking and ethical leadership. No one in all of OpenAI could foresee that Sora 2.0 might be used in these ways? Or they did but didn't care? Either way, this is morally reckless and/or negligent conduct.

The leaders and design folks at OpenAI (and other tech companies) would be well-advised to look at Tool 6 in An Ethical Toolkit for Engineering/Design Practice, created by Santa Clara University Markkula Center for Applied Ethics:

Tool 6: Think About the Terrible People: Positive thinking about our work, as Tool 5 reminds us, is an important part of ethical design. But we must not envision our work being used only by the wisest and best people, in the wisest and best ways. In reality, technology is power, and there will always be those who wish to abuse that power. This tool helps design teams to manage the risks associated with technology abuse.

https://www.scu.edu/ethics-in-technology-practice/ethical-toolkit/

The "Move Fast and Break Things" ethos is alive and well in Big Tech.]


[Excerpt]

"OpenAI said Thursday that it was blocking people from creating videos using the image of the Rev. Dr. Martin Luther King Jr. with its Sora app after users created vulgar and racist depictions of him.

The company said it had made the decision at the request of the King Center as well as Dr. Bernice King, the civil rights leader’s daughter, who had objected to the videos.

The announcement was another effort by OpenAI to respond to criticism of its tools, which critics say operate with few safeguards.

“Some users generated disrespectful depictions of Dr. King’s image,” OpenAI said in a statement. “OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures.”"

Wednesday, October 15, 2025

Hollywood-AI battle deepens, as OpenAI and studios clash over copyrights and consent; Los Angeles Times, October 11, 2025

Wendy Lee and Samantha Masunaga, Los Angeles Times; Hollywood-AI battle deepens, as OpenAI and studios clash over copyrights and consent

  • "OpenAI’s new Sora 2 tool allows users to put real people and characters into AI-generated videos, sparking immediate backlash from Hollywood studios and talent agencies.
  • The dispute centers on who controls copyrighted images and likenesses, with Hollywood arguing OpenAI cannot use content without explicit permission or compensation.
  • The clash between Silicon Valley’s “move fast and break things” ethos and Hollywood’s intellectual property protections could shape the future of AI in entertainment."

Sunday, October 12, 2025

OpenAI Risks Billions as Court Weighs Privilege in Copyright Row; Bloomberg Law, October 10, 2025


Bloomberg Law; OpenAI Risks Billions as Court Weighs Privilege in Copyright Row

"Authors and publishers suing the artificial intelligence giant have secured access to some Slack messages and emails discussing OpenAI’s deletion of a dataset containing pirated books and are seeking additional attorney communications about the decision. If they succeed, the communications could demonstrate willful infringement, triggering enhanced damages of as much as $150,000 per work...

The US District Court for the Southern District of New York last week ordered OpenAI to turn over most employee communications about the data deletion that the AI company argued were protected by attorney-client privilege. OpenAI may appeal the decision. A separate bid for OpenAI’s correspondence with in-house and outside attorneys remains pending."

Saturday, October 11, 2025

AI videos of dead celebrities are horrifying many of their families; The Washington Post, October 11, 2025


The Washington Post; AI videos of dead celebrities are horrifying many of their families


[Kip Currier: OpenAI CEO Sam Altman's reckless actions in releasing Sora 2.0 without guardrails and accountability mechanisms exemplify Big Tech's ongoing Zuckerberg-ian "Move Fast and Break Things" modus operandi in the AI Age.

Altman also recently had to walk back his ill-conceived directive that copyright holders would need to opt out of having their copyrighted works used as AI training data (yet again!), rather than the burden being on OpenAI to secure their opt-ins through licensing.

To learn more about potential further copyright-related questionable conduct by OpenAI, read this 10/10/25 Bloomberg Law article:  OpenAI Risks Billions as Court Weighs Privilege in Copyright Row]

[Excerpt]

"OpenAI said the text-to-video tool would depict real people only with their consent. But it exempted “historical figures” from these limits during its launch last week, allowing anyone to make fake videos resurrecting public figures, including activists, celebrities and political leaders — and leaving some of their relatives horrified.

“It is deeply disrespectful and hurtful to see my father’s image used in such a cavalier and insensitive manner when he dedicated his life to truth,” Ilyasah Shabazz, whose father, Malcolm X, was assassinated in front of her in 1965 when she was 2, told The Washington Post. She questioned why the developers were not acting “with the same morality, conscience, and care … that they’d want for their own families.”

Sora’s videos have sparked agitation and disgust from many of the depicted celebrities’ loved ones, including actor Robin Williams’s daughter, Zelda Williams, who pleaded in an Instagram post recently for people to “stop sending me AI videos of dad.”"

OpenAI’s Sora Is in Serious Trouble; Futurism, October 10, 2025

Futurism; OpenAI’s Sora Is in Serious Trouble

"The cat was already out of the bag, though, sparking what’s likely to be immense legal drama for OpenAI. On Monday, the Motion Picture Association, a US trade association that represents major film studios, released a scorching statement urging OpenAI to “take immediate and decisive action” to stop the app from infringing on copyrighted media.

Meanwhile, OpenAI appears to have come down hard on what kind of text prompts can be turned into AI slop on Sora, implementing sweeping new guardrails presumably meant to appease furious rightsholders and protect their intellectual property.

As a result, power users experienced major whiplash that’s tarnishing the launch’s image even among fans. It’s a lose-lose moment for OpenAI’s flashy new app — either aggravate rightsholders by allowing mass copyright infringement, or turn it into yet another mind-numbing screensaver-generating experience like Meta’s widely mocked Vibes.

“It’s official, Sora 2 is completely boring and useless with these copyright restrictions. Some videos should be considered fair use,” one Reddit user lamented.

Others accused OpenAI of abusing copyright to hype up its new app...

How OpenAI’s eyebrow-raising ask-for-forgiveness-later approach to copyright will play out in the long term remains to be seen. For one, the company may already be in hot water, as major Hollywood studios have already started suing over less."

Friday, October 10, 2025

You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out; Gizmodo, October 8, 2025

Gizmodo; You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

 "OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well."

It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?; The Guardian, October 10, 2025

The Guardian; It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?

"I’ve seen it said that OpenAI’s motto should be “better to beg forgiveness than ask permission”, but that cosies it preposterously. Its actual motto seems to be “we’ll do what we want and you’ll let us, bitch”. Consider Altman’s recent political journey. “To anyone familiar with the history of Germany in the 1930s,” Sam warned in 2016, “it’s chilling to watch Trump in action.” He seems to have got over this in time to attend Donald Trump’s second inauguration, presumably because – if we have to extend his artless and predictable analogy – he’s now one of the industrialists welcome in the chancellery to carve up the spoils. “Thank you for being such a pro-business, pro-innovation president,” Sam simpered to Trump at a recent White House dinner for tech titans. “It’s a very refreshing change.” Inevitably, the Trump administration has refused to bring forward any AI regulation at all.

Meanwhile, please remember something Sam and his ironicidal maniacs said earlier this year, when it was suggested that the Chinese AI chatbot DeepSeek might have been trained on some of OpenAI’s work. “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more,” his firm’s anguished statement ran. “We take aggressive, proactive countermeasures to protect our technology.” Hilariously, it seemed that the last entity on earth with the power to fight AI theft was OpenAI."

Wednesday, October 8, 2025

OpenAI wasn’t expecting Sora’s copyright drama; The Verge, October 8, 2025

Hayden Field, The Verge; OpenAI wasn’t expecting Sora’s copyright drama

"When OpenAI released its new AI-generated video app Sora last week, it launched with an opt-out policy for copyright holders — media companies would need to expressly indicate they didn’t want their AI-generated characters running rampant on the app. But after days of Nazi SpongeBob, criminal Pikachu, and Sora-philosophizing Rick and Morty, OpenAI CEO Sam Altman announced the company would reverse course and “let rightsholders decide how to proceed.”

In response to a question about why OpenAI changed its policy, Altman said that it came from speaking with stakeholders and suggested he hadn’t expected the outcry.

“I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses,” Altman said. “It felt more different to images than people expected.”"

Sunday, October 5, 2025

OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'; PC Gamer, October 5, 2025

PC Gamer; OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'

"This video is just one of many examples, but you'll have a much harder time finding Sora-generated videos containing Marvel or Disney characters. As reported by Automaton, Sora appears to be refusing prompts containing references to American IP, but Japanese IP didn't seem to be getting the same treatment over the past week.

Japanese lawyer and House of Representatives member Akihisa Shiozaki called for action to protect creatives in a post on X (formerly Twitter), which has been translated by Automaton: "I’ve tried out [Sora 2] myself, but I felt that it poses a serious legal and political problem. We need to take immediate action if we want to protect leading Japanese creators and the domestic content industry, and help them further develop. (I wonder why Disney and Marvel characters can’t be displayed).""

Saturday, October 4, 2025

Sam Altman says Sora will add ‘granular,’ opt-in copyright controls; TechCrunch, October 4, 2025

Anthony Ha, TechCrunch; Sam Altman says Sora will add ‘granular,’ opt-in copyright controls

"OpenAI may be reversing course on how it approaches copyright and intellectual property in its new video app Sora.

Prior to Sora’s launch this week, The Wall Street Journal reported that OpenAI had been telling Hollywood studios and agencies that they needed to explicitly opt out if they didn’t want their IP to be included in Sora-generated videos.

Despite being invite-only, the app quickly climbed to the top of the App Store charts. Sora’s most distinctive feature may be its “cameos,” where users can upload their biometric data to see their digital likeness featured in AI-generated videos.

At the same time, users also seem to delight in flouting copyright laws by creating videos with popular, studio-owned characters. In some cases, those characters might even criticize the company’s approach to copyright, for example in videos where Pikachu and SpongeBob interact with deepfakes of OpenAI CEO Sam Altman.

In a blog post published Friday, Altman said the company is already planning two changes to Sora, first by giving copyright holders “more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.”"

Tuesday, September 30, 2025

OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports; Reuters, September 29, 2025

Reuters; OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports

"OpenAI is planning to release a new version of its Sora generator that creates videos featuring copyrighted material, unless rights holders opt out of having their work appear, the Wall Street Journal reported on Monday, citing people familiar with the matter.

The artificial intelligence startup began notifying talent agencies and studios over the past week about the opt-out process and the product, which it plans to release in the coming days, the report said.

The new process would mean movie studios and other intellectual property owners would have to explicitly ask OpenAI not to include their copyrighted material in videos Sora creates, according to the report."

Monday, August 11, 2025

Boston Public Library aims to increase access to a vast historic archive using AI; NPR, August 11, 2025

NPR; Boston Public Library aims to increase access to a vast historic archive using AI

"Boston Public Library, one of the oldest and largest public library systems in the country, is launching a project this summer with OpenAI and Harvard Law School to make its trove of historically significant government documents more accessible to the public.

The documents date back to the early 1800s and include oral histories, congressional reports and surveys of different industries and communities...

Currently, members of the public who want to access these documents must show up in person. The project will enhance the metadata of each document and will enable users to search and cross-reference entire texts from anywhere in the world. 

Chapel said Boston Public Library plans to digitize 5,000 documents by the end of the year, and if all goes well, grow the project from there...

Harvard University said it could help. Researchers at the Harvard Law School Library's Institutional Data Initiative are working with libraries, museums and archives on a number of fronts, including training new AI models to help libraries enhance the searchability of their collections. 

AI companies help fund these efforts, and in return get to train their large language models on high-quality materials that are out of copyright and therefore less likely to lead to lawsuits. (Microsoft and OpenAI are among the many AI players targeted by recent copyright infringement lawsuits, in which plaintiffs such as authors claim the companies stole their works without permission.)"

Sunday, July 20, 2025

AI guzzled millions of books without permission. Authors are fighting back.; The Washington Post, July 19, 2025

The Washington Post; AI guzzled millions of books without permission. Authors are fighting back.


[Kip Currier: I've written this before on this blog and I'll say it again: technology companies would never allow anyone to freely vacuum up their content and use it without permission or compensation. Period. Full Stop.]


[Excerpt]

"Baldacci is among a group of authors suing OpenAI and Microsoft over the companies’ use of their work to train the AI software behind tools such as ChatGPT and Copilot without permission or payment — one of more than 40 lawsuits against AI companies advancing through the nation’s courts. He and other authors this week appealed to Congress for help standing up to what they see as an assault by Big Tech on their profession and the soul of literature.

They found sympathetic ears at a Senate subcommittee hearing Wednesday, where lawmakers expressed outrage at the technology industry’s practices. Their cause gained further momentum Thursday when a federal judge granted class-action status to another group of authors who allege that the AI firm Anthropic pirated their books.

“I see it as one of the moral issues of our time with respect to technology,” Ralph Eubanks, an author and University of Mississippi professor who is president of the Authors Guild, said in a phone interview. “Sometimes it keeps me up at night.”

Lawsuits have revealed that some AI companies had used legally dubious “torrent” sites to download millions of digitized books without having to pay for them."

Wednesday, July 16, 2025

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."