Showing posts with label OpenAI. Show all posts

Sunday, December 14, 2025

The Disney-OpenAI tie-up has huge implications for intellectual property; Fast Company, December 11, 2025

CHRIS STOKEL-WALKER, Fast Company; The Disney-OpenAI tie-up has huge implications for intellectual property

"Walt Disney and OpenAI make for very odd bedfellows: The former is one of the most-recognized brands among children under the age of 18. The near-$200 billion company’s value has been derived from more than a century of aggressive safeguarding of its intellectual property and keeping the magic alive among innocent children.

OpenAI, which celebrated its first decade of existence this week, is best known for upending creativity, the economy, and society with its flagship product, ChatGPT. And in the last two months, it has said it wants to get to a place where its adult users can use its tech to create erotica.

So what the hell should we make of a just-announced deal between the two that will allow ChatGPT and Sora users to create images and videos of more than 200 characters, from Mickey and Minnie Mouse to the Mandalorian, starting from early 2026?"


Friday, December 12, 2025

The Disney-OpenAI Deal Redefines the AI Copyright War; Wired, December 11, 2025

BRIAN BARRETT, Wired; The Disney-OpenAI Deal Redefines the AI Copyright War

"“I think that AI companies and copyright holders are beginning to understand and become reconciled to the fact that neither side is going to score an absolute victory,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. While many of these cases are still working their way through the courts, so far it seems like model inputs—the training data that these models learn from—are covered by fair use. But this deal is about outputs—what the model returns based on your prompt—where IP owners like Disney have a much stronger case.

Coming to an output agreement resolves a host of messy, potentially unsolvable issues. Even if a company tells an AI model not to produce, say, Elsa at a Wendy’s drive-through, the model might know enough about Elsa to do so anyway—or a user might be able to prompt their way into making Elsa without asking for the character by name. It’s a tension that legal scholars call the “Snoopy problem,” but in this case you might as well call it the Disney problem.

“Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements,” says Sag."

Disney's deal with OpenAI is about controlling the future of copyright; engadget, December 11, 2025

Igor Bonifacic, engadget; Disney's deal with OpenAI is about controlling the future of copyright

"The agreement brings together two parties with very different public stances on copyright. Before OpenAI released Sora, the company reportedly notified studios and talent agencies they would need to opt out of having their work appear in the new app. The company later backtracked on this stance. Before that, OpenAI admitted, in a regulatory filing, it would be "impossible to train today's leading AI models without using copyrighted materials."

By contrast, Disney takes copyright law very seriously. In fact, you could argue no other company has done more to shape US copyright law than Disney. For example, there's the Sonny Bono Copyright Term Extension Act, which is more derisively known as the Mickey Mouse Protection Act. The law effectively froze the advancement of the public domain in the United States, with Disney being the greatest beneficiary. It was only last year that the company's copyright for Steamboat Willie expired, 95 years after Walt Disney first created the iconic cartoon."

Thursday, December 11, 2025

Disney Agrees to Bring Its Characters to OpenAI’s Sora Videos; The New York Times, December 11, 2025

The New York Times; Disney Agrees to Bring Its Characters to OpenAI’s Sora Videos

"In a watershed moment for Hollywood and generative artificial intelligence, Disney on Thursday announced an agreement to bring its characters to Sora, OpenAI’s short-form video platform. Videos made with Sora will be available to stream on Disney+ as part of the three-year deal...

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling,” Robert A. Iger, the chief executive of Disney, said in a statement.

Disney is the first major Hollywood company to cross this particular Rubicon."

Wednesday, December 10, 2025

AI firms began to feel the legal wrath of copyright holders in 2025; NewScientist, December 10, 2025

Chris Stokel-Walker, NewScientist; AI firms began to feel the legal wrath of copyright holders in 2025

"The three years since the release of ChatGPT, OpenAI’s generative AI chatbot, have seen huge changes in every part of our lives. But one area that hasn’t changed – or at least, is still trying to maintain pre-AI norms – is the upholding of copyright law.

It is no secret that leading AI firms built their models by hoovering up data, including copyrighted material, from the internet without asking for permission first. This year, major copyright holders struck back, buffeting AI companies with a range of lawsuits alleging copyright infringement."

Tuesday, December 9, 2025

A.I. Videos Have Flooded Social Media. No One Was Ready.; The New York Times, December 8, 2025

Steven Lee Myers, The New York Times; A.I. Videos Have Flooded Social Media. No One Was Ready.

Apps like OpenAI’s Sora are fooling millions of users into thinking A.I. videos are real, even when they include warning labels.

"Videos like the fake interview above, created with OpenAI’s new app, Sora, show how easily public perceptions can be manipulated by tools that can produce an alternate reality with a series of simple prompts.

In the two months since Sora arrived, deceptive videos have surged on TikTok, X, YouTube, Facebook and Instagram, according to experts who track them. The deluge has raised alarm over a new generation of disinformation and fakes.

Most of the major social media companies have policies that require disclosure of artificial intelligence use and broadly prohibit content intended to deceive. But those guardrails have proved woefully inadequate for the kind of technological leaps OpenAI’s tools represent."

Thursday, December 4, 2025

OpenAI loses fight to keep ChatGPT logs secret in copyright case; Reuters, December 3, 2025

Reuters; OpenAI loses fight to keep ChatGPT logs secret in copyright case

"OpenAI must produce millions of anonymized chat logs from ChatGPT users in its high-stakes copyright dispute with the New York Times and other news outlets, a federal judge in Manhattan ruled.

U.S. Magistrate Judge Ona Wang in a decision made public on Wednesday said that the 20 million logs were relevant to the outlets' claims and that handing them over would not risk violating users' privacy."

Tuesday, November 18, 2025

OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert; Bloomberg Law, November 18, 2025

Aruni Soni, Bloomberg Law; OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert

"OpenAI Inc. is banking on a privacy argument to block a court’s probe into millions of ChatGPT user conversations. 

That hasn’t worked so far as a winning legal strategy that can be used by other chatbot makers anticipating similar discovery demands in exploding chatbot-related litigation.

Instead, it threatens to turn attention to just how much information chatbots like ChatGPT are collecting and retaining about their users."

Saturday, November 15, 2025

We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.; The Washington Post, November 12, 2025

The Washington Post; We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.

"OpenAI has largely promoted ChatGPT as a productivity tool, and in many conversations users asked for help with practical tasks such as retrieving information. But in more than 1 in 10 of the chats The Post analyzed, people engaged the chatbot in abstract discussions, musing on topics like their ideas for breakthrough medical treatments or personal beliefs about the nature of reality.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work. (The Post has a content partnership with OpenAI.)...

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”"

Thursday, November 13, 2025

OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA; The Bookseller, November 12, 2025

MATILDA BATTERSBY, The Bookseller; OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA

"A judge’s ruling that legal action by authors against OpenAI for copyright infringement can go ahead reveals “the ease with which generative AI can devastate the market”, according to the Publishers Association (PA).

Last week, a federal judge in the US refused OpenAI’s attempts to dismiss claims by authors that text summaries of published works by ChatGPT (which is owned by OpenAI) infringes their copyrights.

The lawsuit, which is being heard in New York, brings together cases from a number of authors, as well as the Authors Guild, filed in various courts.

In his ruling, which upheld the authors’ right to attempt to sue OpenAI, District Judge Sidney Stein compared George RR Martin’s Game of Thrones to summaries of the novel created by ChatGPT.

Judge Stein said: “[A] discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work because the summary conveys the overall tone and feel of the original work by parroting the plot, characters and themes of the original.”

The class action consolidates 12 complaints being brought against OpenAI and Microsoft. It argues copyrighted books were reproduced to train OpenAI’s artificial intelligence large language models (LLM) and, crucially, that LLMs, including ChatGPT, can infringe copyright via their output, ie the text produced when asked a question.

This landmark legal case is the first to examine whether the output of an AI chatbot infringes copyright, rather than looking at whether the training of the model was an infringement."

Wednesday, November 12, 2025

OpenAI used song lyrics in violation of copyright laws, German court says; Reuters, November 11, 2025

Reuters; OpenAI used song lyrics in violation of copyright laws, German court says

"OpenAI's chatbot ChatGPT violated German copyright laws by reproducing lyrics from songs by best-selling musician Herbert Groenemeyer and others, a court ruled on Tuesday, in a closely watched case against the U.S. firm over its use of lyrics to train its language models.

The regional court in Munich found that the company trained its AI on protected content from nine German songs, including Groenemeyer's hits "Maenner" and "Bochum"."

Monday, November 3, 2025

Japanese Companies Tell OpenAI to Stop Infringing On Its IP; Gizmodo, November 2, 2025

Gizmodo; Japanese Companies Tell OpenAI to Stop Infringing On Its IP

"The Content Overseas Distribution Association (CODA), which represents several major Japanese entertainment companies such as TV studio Toei and game developer Square Enix, recently sent a written request calling on OpenAI to end its unauthorized use of their IP to train its recently launched Sora 2 generative AI.

Nearly 20 co-signers have accused the tech company of copyright infringement, alleging a “large portion” of Sora 2 content “closely resembles Japanese content or images [as] a result of using Japanese content as machine learning data.” The letter mentioned OpenAI’s policy of using copyrighted works unless the owner explicitly asks to opt out, but argues under Japanese law, it should instead be an opt-in system, since permission for copyrighted works is generally required beforehand."

Thursday, October 30, 2025

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025

The Guardian; AI psychosis is a growing danger. ChatGPT is moving in the wrong direction


[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict of interest-riven assessment that AI's benefits outweigh a corpus of evidence establishing increasingly documented risks and harms of AI to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."

Wednesday, October 29, 2025

Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025

The New York Times; Big Tech Makes Cal State Its A.I. Training Ground

"Cal State, the largest U.S. university system with 460,000 students, recently embarked on a public-private campaign — with corporate titans including Amazon, OpenAI and Nvidia — to position the school as the nation’s “first and largest A.I.-empowered” university. One central goal is to make generative A.I. tools, which can produce humanlike texts and images, available across the school’s 22 campuses. Cal State also wants to embed chatbots in teaching and learning, and prepare students for “increasingly A.I.-driven” careers.

As part of the effort, the university is paying OpenAI $16.9 million to provide ChatGPT Edu, the company’s tool for schools, to more than half a million students and staff — which OpenAI heralded as the world’s largest rollout of ChatGPT to date. Cal State also set up an A.I. committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students’ career opportunities."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety–and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025

Reuters; OpenAI loses bid to dismiss part of US authors' copyright lawsuit

"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.

U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."

Tuesday, October 21, 2025

It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT; Futurism, October 18, 2025

Futurism; It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT

"Forget Sora for just a second, because it’s still ludicrously easy to generate copyrighted characters using ChatGPT.

These include characters that the AI initially refuses to generate due to existing copyright, underscoring how OpenAI is clearly aware of how bad this looks — but is either still struggling to rein in its tech, figures it can get away with playing fast and loose with copyright law, or both.

When asked to “generate a cartoon image of Snoopy,” for instance, GPT-5 says it “can’t create or recreate copyrighted characters” — but it does offer to generate a “beagle-styled cartoon dog inspired by Snoopy’s general aesthetic.” Wink wink.

We didn’t go down that route, because even slightly rephrasing the request allowed us to directly get a pic of the iconic Charles Schultz character. “Generate a cartoon image of Snoopy in his original style,” we asked — and with zero hesitation, ChatGPT produced the spitting image of the “Peanuts” dog, looking like he was lifted straight from a page of the comic-strip."

Saturday, October 18, 2025

OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions; The New York Times, October 17, 2025

The New York Times; OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions


[Kip Currier: This latest tech company debacle is another example of breakdowns in technology design thinking and ethical leadership. No one in all of OpenAI could foresee that Sora 2.0 might be used in these ways? Or they did but didn't care? Either way, this is morally reckless and/or negligent conduct.

The leaders and design folks at OpenAI (and other tech companies) would be well-advised to look at Tool 6 in An Ethical Toolkit for Engineering/Design Practice, created by Santa Clara University Markkula Center for Applied Ethics:

Tool 6: Think About the Terrible People: Positive thinking about our work, as Tool 5 reminds us, is an important part of ethical design. But we must not envision our work being used only by the wisest and best people, in the wisest and best ways. In reality, technology is power, and there will always be those who wish to abuse that power. This tool helps design teams to manage the risks associated with technology abuse.

https://www.scu.edu/ethics-in-technology-practice/ethical-toolkit/

The "Move Fast and Break Things" ethos is alive and well in Big Tech.]


[Excerpt]

"OpenAI said Thursday that it was blocking people from creating videos using the image of the Rev. Dr. Martin Luther King Jr. with its Sora app after users created vulgar and racist depictions of him.

The company said it had made the decision at the request of the King Center as well as Dr. Bernice King, the civil rights leader’s daughter, who had objected to the videos.

The announcement was another effort by OpenAI to respond to criticism of its tools, which critics say operate with few safeguards.

“Some users generated disrespectful depictions of Dr. King’s image,” OpenAI said in a statement. “OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures.”"

Wednesday, October 15, 2025

Hollywood-AI battle deepens, as OpenAI and studios clash over copyrights and consent; Los Angeles Times, October 11, 2025

Wendy Lee and Samantha Masunaga, Los Angeles Times; Hollywood-AI battle deepens, as OpenAI and studios clash over copyrights and consent

  • "OpenAI’s new Sora 2 tool allows users to put real people and characters into AI-generated videos, sparking immediate backlash from Hollywood studios and talent agencies.
  • The dispute centers on who controls copyrighted images and likenesses, with Hollywood arguing OpenAI cannot use content without explicit permission or compensation.
  • The clash between Silicon Valley’s “move fast and break things” ethos and Hollywood’s intellectual property protections could shape the future of AI in entertainment."

Sunday, October 12, 2025

OpenAI Risks Billions as Court Weighs Privilege in Copyright Row; Bloomberg Law, October 10, 2025

Bloomberg Law; OpenAI Risks Billions as Court Weighs Privilege in Copyright Row

"Authors and publishers suing the artificial intelligence giant have secured access to some Slack messages and emails discussing OpenAI’s deletion of a dataset containing pirated books and are seeking additional attorney communications about the decision. If they succeed, the communications could demonstrate willful infringement, triggering enhanced damages of as much as $150,000 per work...

The US District Court for the Southern District of New York last week ordered OpenAI to turn over most employee communications about the data deletion that the AI company argued were protected by attorney-client privilege. OpenAI may appeal the decision. A separate bid for OpenAI’s correspondence with in-house and outside attorneys remains pending."