Showing posts with label OpenAI.

Friday, March 28, 2025

ChatGPT's new image generator blurs copyright lines; Axios, March 28, 2025

Ina Fried, Axios; ChatGPT's new image generator blurs copyright lines

"AI image generators aren't new, but the one OpenAI handed to ChatGPT's legions of users this week is more powerful and has fewer guardrails than its predecessors — opening up a range of uses that are both tantalizing and terrifying."


Thursday, March 27, 2025

Judge allows 'New York Times' copyright case against OpenAI to go forward; NPR, March 27, 2025

NPR; Judge allows 'New York Times' copyright case against OpenAI to go forward

"A federal judge on Wednesday rejected OpenAI's request to toss out a copyright lawsuit from The New York Times that alleges that the tech company exploited the newspaper's content without permission or payment.

In an order allowing the lawsuit to go forward, Judge Sidney Stein, of the Southern District of New York, narrowed the scope of the lawsuit but allowed the case's main copyright infringement claims to go forward.

Stein did not immediately release an opinion but promised one would come "expeditiously."

The decision is a victory for the newspaper, which has joined forces with other publishers, including The New York Daily News and the Center for Investigative Reporting, to challenge the way that OpenAI collected vast amounts of data from the web to train its popular artificial intelligence service, ChatGPT."

Monday, March 24, 2025

Should AI be treated the same way as people are when it comes to copyright law?; The Hill, March 24, 2025

Nicholas Creel, The Hill; Should AI be treated the same way as people are when it comes to copyright law?

"The New York Times’s lawsuit against OpenAI and Microsoft highlights an uncomfortable contradiction in how we view creativity and learning. While the Times accuses these companies of copyright infringement for training AI on their content, this ignores a fundamental truth: AI systems learn exactly as humans do, by absorbing, synthesizing and transforming existing knowledge into something new."

Sunday, March 16, 2025

OpenAI declares AI race “over” if training on copyrighted works isn’t fair use; Ars Technica, March 13, 2025

Ashley Belanger, Ars Technica; OpenAI declares AI race “over” if training on copyrighted works isn’t fair use

"OpenAI is hoping that Donald Trump's AI Action Plan, due out this July, will settle copyright debates by declaring AI training fair use—paving the way for AI companies' unfettered access to training data that OpenAI claims is critical to defeat China in the AI race.

Currently, courts are mulling whether AI training is fair use, as rights holders say that AI models trained on creative works threaten to replace them in markets and water down humanity's creative output overall.

OpenAI is just one AI company fighting with rights holders in several dozen lawsuits, arguing that AI transforms copyrighted works it trains on and alleging that AI outputs aren't substitutes for original works.

So far, one landmark ruling favored rights holders, with a judge declaring AI training is not fair use, as AI outputs clearly threatened to replace Thomson Reuters' legal research firm Westlaw in the market, Wired reported. But OpenAI now appears to be looking to Trump to avoid a similar outcome in its lawsuits, including a major suit brought by The New York Times."

Saturday, February 8, 2025

OpenAI says DeepSeek ‘inappropriately’ copied ChatGPT – but it’s facing copyright claims too; The Conversation, February 4, 2025

Senior Lecturer in Natural Language Processing and Lecturer in Cybersecurity, The University of Melbourne, The Conversation; OpenAI says DeepSeek ‘inappropriately’ copied ChatGPT – but it’s facing copyright claims too

"Within days, DeepSeek’s app surpassed ChatGPT in new downloads and set stock prices of tech companies in the United States tumbling. It also led OpenAI to claim that its Chinese rival had effectively pilfered some of the crown jewels from OpenAI’s models to build its own. 

In a statement to the New York Times, the company said: 

We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more. We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here.

The Conversation approached DeepSeek for comment, but it did not respond.

But even if DeepSeek copied – or, in scientific parlance, “distilled” – at least some of ChatGPT to build R1, it’s worth remembering that OpenAI also stands accused of disrespecting intellectual property while developing its models."

Saturday, January 18, 2025

News organizations sue OpenAI over copyright infringement claims; Jurist.org, January 16, 2025

Jurist.org; News organizations sue OpenAI over copyright infringement claims

"The case centers on allegations that OpenAI unlawfully utilized copyrighted content from various publishers, including The New York Times, to train its generative AI models and the hearing could determine whether OpenAI will face trial.

The plaintiffs claim that ChatGPT’s ability to generate human-like responses stems from the unauthorized use of their work without permission or compensation to develop their large language models (LLMs). OpenAI and its financial backer Microsoft argue that its use of data is protected under the fair use doctrine, which allows limited use of copyrighted material without permission for purposes such as commentary, criticism or education.

Additionally, OpenAI’s legal team asserts that The New York Times has not demonstrated actual harm resulting from their practices and that its use of the copyrighted material is transformative as it does not replicate the content verbatim. On the other hand, the plaintiffs are arguing copyright infringement because OpenAI removed identifiable information such as author bylines and publication details when using the content. They also contend that the LLMs absorb and reproduce expressions from the training data without genuine understanding."

Thursday, January 16, 2025

In AI copyright case, Zuckerberg turns to YouTube for his defense; TechCrunch, January 15, 2025

TechCrunch; In AI copyright case, Zuckerberg turns to YouTube for his defense

"Meta CEO Mark Zuckerberg appears to have used YouTube’s battle to remove pirated content to defend his own company’s use of a data set containing copyrighted e-books, reveals newly released snippets of a deposition he gave late last year.

The deposition, which was part of a complaint submitted to the court by plaintiffs’ attorneys, is related to the AI copyright case Kadrey v. Meta. It’s one of many such cases winding through the U.S. court system that’s pitting AI companies against authors and other IP holders. For the most part, the defendants in these cases – AI companies – claim that training on copyrighted content is “fair use.” Many copyright holders disagree."

Wednesday, January 15, 2025

'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line; NPR, January 14, 2025

NPR; 'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line

"A group of news organizations, led by The New York Times, took ChatGPT maker OpenAI to federal court on Tuesday in a hearing that could determine whether the tech company has to face the publishers in a high-profile copyright infringement trial.

Three publishers' lawsuits against OpenAI and its financial backer Microsoft have been merged into one case. Leading each of the three combined cases are the Times, The New York Daily News and the Center for Investigative Reporting.

Other publishers, like the Associated Press, News Corp. and Vox Media, have reached content-sharing deals with OpenAI, but the three litigants in this case are taking the opposite path: going on the offensive."

Monday, January 6, 2025

OpenAI holds off on promise to creators, fails to protect intellectual property; The American Bazaar, January 3, 2025

 Vishnu Kamal, The American Bazaar; OpenAI holds off on promise to creators, fails to protect intellectual property

"OpenAI may yet again be in hot water as it seems that the tech giant may be reneging on its earlier assurances. Reportedly, in May, OpenAI said it was developing a tool to let creators specify how they want their works to be included in—or excluded from—its AI training data. But seven months later, this feature has yet to see the light of day.

Called Media Manager, the tool would “identify copyrighted text, images, audio, and video,” OpenAI said at the time, to reflect creators’ preferences “across multiple sources.” It was intended to stave off some of the company’s fiercest critics, and potentially shield OpenAI from IP-related legal challenges...

OpenAI has faced various legal challenges related to its AI technologies and operations. One major issue involves the privacy and data usage of its language models, which are trained on large datasets that may include publicly available or copyrighted material. This raises concerns over privacy violations and intellectual property rights, especially regarding whether the data used for training was obtained with proper consent.

Additionally, there are questions about the ownership of content generated by OpenAI’s models. If an AI produces a work based on copyrighted data, it is tricky to determine who owns the rights—whether it’s OpenAI, the user who prompted the AI, or the creators of the original data.

Another concern is the liability for harmful content produced by AI. If an AI generates misleading or defamatory information, legal responsibility could fall on OpenAI."

Saturday, December 28, 2024

SDNY Order Renews Possibility of Digital Millennium Copyright Act as Legal Recourse for News Organizations in the Age of AI; The National Law Review, December 23, 2024

Dan Jasnow of ArentFox Schiff LLP - AI Law Blog, The National Law Review; SDNY Order Renews Possibility of Digital Millennium Copyright Act as Legal Recourse for News Organizations in the Age of AI

"Key Takeaway: The Intercept’s Case Against OpenAI Will Clarify the Future of DMCA Protection Against AI Developers

Until now, other DMCA claims against AI developers have largely failed — most of these cases have not proceeded past the motion-to-dismiss stage — but the order allowing The Intercept’s claim to proceed renews the possibility that the DMCA may be a viable claim against AI developers. For rights holders, 1202(b) provides distinct causes of action against AI developers with different evidentiary requirements than traditional copyright infringement claims. For developers, 1202(b) is another legal risk to be managed, particularly in the wake of the order in The Intercept case."

Tuesday, December 10, 2024

OpenAI makes AI video generator Sora publicly available in US; The Guardian, December 9, 2024

The Guardian; OpenAI makes AI video generator Sora publicly available in US

"Anyone in the US can now use OpenAI’s artificial intelligence video generator, Sora, which the company announced on Monday would become publicly available. OpenAI first presented Sora in February, but it was only accessible to select artists, film-makers and safety testers. At multiple points on Monday, though, OpenAI’s website did not allow for new sign-ups for Sora, citing heavy traffic...

While generative AI has improved considerably over the past year, it is still prone to hallucinations, or incorrect responses, and plagiarism. AI image generators also often produce unrealistic images, such as people with several arms or misplaced facial features.

Critics warn that this type of AI video technology could be misused by bad actors for disinformation, scams and deepfakes. There have already been deepfake videos of the Ukrainian president, Volodymyr Zelenskyy, supposedly calling for a ceasefire and of Kamala Harris supposedly describing herself as “the ultimate diversity hire”.

OpenAI said in its blogpost that it would initially limit uploads of specific people and that it will block content with nudity. The company said that it was additionally “blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes”.

Sora will be available to users who already subscribe and pay for OpenAI’s tools. People in the US and “most countries internationally” will have access to the tool, but it will not be available in the UK or Europe."

Wednesday, December 4, 2024

OpenAI Must Hand Over Execs' Social Media DMs in Copyright Suits; Bloomberg Law, December 3, 2024

Bloomberg Law; OpenAI Must Hand Over Execs' Social Media DMs in Copyright Suits

"OpenAI Inc. failed to convince a federal magistrate judge that California labor law precludes it from turning over executives’ social media messages to New York Times Co. and an authors group suing the company for copyright infringement."

Saturday, November 30, 2024

‘AI Jesus’ avatar tests man’s faith in machines and the divine; AP, November 28, 2024

Jamey Keaten, AP; ‘AI Jesus’ avatar tests man’s faith in machines and the divine

"Researchers and religious leaders on Wednesday released findings from a two-month experiment through art in a Catholic chapel in Switzerland, where an avatar of “Jesus” on a computer screen — tucked into a confessional — took questions by visitors on faith, morality and modern-day woes, and offered responses based on Scripture...

Philipp Haslbauer, an IT specialist at the Lucerne University of Applied Sciences and Arts who pulled together the technical side of the project, said the AI responsible for taking the role of “AI Jesus” and generating responses was GPT-4o by OpenAI, and an open-source version of the company’s Whisper was used for speech comprehension. 

An AI video generator from Heygen was used to produce voice and video from a real person, he said. Haslbauer said no specific safeguards were used “because we observed GPT-4o to respond fairly well to controversial topics.”

Visitors broached many topics, including true love, the afterlife, feelings of solitude, war and suffering in the world, the existence of God, plus issues like sexual abuse cases in the Catholic Church or its position on homosexuality.

Most visitors described themselves as Christians, though agnostics, atheists, Muslims, Buddhists and Taoists took part too, according to a recap of the project released by the Catholic parish of Lucerne.

About one-third were German speakers, but “AI Jesus” — which is conversant in about 100 languages — also had conversations in languages like Chinese, English, French, Hungarian, Italian, Russian and Spanish."

Friday, November 29, 2024

Major Canadian News Outlets Sue OpenAI in New Copyright Case; The New York Times, November 29, 2024

The New York Times; Major Canadian News Outlets Sue OpenAI in New Copyright Case

"A coalition of Canada’s biggest news organizations is suing OpenAI, the maker of the artificial intelligence chatbot, ChatGPT, accusing the company of illegally using their content in the first case of its kind in the country.

Five of the country’s major news companies, including the publishers of its top newspapers, newswires and the national broadcaster, filed the joint suit in the Ontario Superior Court of Justice on Friday morning...

The Canadian outlets, which include the Globe and Mail, the Toronto Star and the CBC — the Canadian Broadcasting Corporation — are seeking what could add up to billions of dollars in damages. They are asking for 20,000 Canadian dollars, or $14,700, per article they claim was illegally scraped and used to train ChatGPT.

They are also seeking a share of the profits made by what they claim is OpenAI’s misuse of their content, as well as for the company to stop such practices in the future."

Thursday, November 21, 2024

OpenAI accidentally deleted potential evidence in NY Times copyright lawsuit; TechCrunch, November 20, 2024

Kyle Wiggers, TechCrunch; OpenAI accidentally deleted potential evidence in NY Times copyright lawsuit

"OpenAI tried to recover the data — and was mostly successful. However, because the folder structure and file names were “irretrievably” lost, the recovered data “cannot be used to determine where the news plaintiffs’ copied articles were used to build [OpenAI’s] models,” per the letter.

“News plaintiffs have been forced to recreate their work from scratch using significant person-hours and computer processing time,” counsel for The Times and Daily News wrote. “The news plaintiffs learned only yesterday that the recovered data is unusable and that an entire week’s worth of its experts’ and lawyers’ work must be re-done, which is why this supplemental letter is being filed today.”

The plaintiffs’ counsel makes clear that they have no reason to believe the deletion was intentional. But they do say the incident underscores that OpenAI “is in the best position to search its own datasets” for potentially infringing content using its own tools."

Wednesday, November 20, 2024

Indian news agency sues OpenAI alleging copyright infringement; TechCrunch, November 18, 2024

 Manish Singh, TechCrunch; Indian news agency sues OpenAI alleging copyright infringement

"One of India’s largest news agencies, Asian News International (ANI), has sued OpenAI in a case that could set a precedent for how AI companies use copyrighted news content in the world’s most populous nation.

Asian News International filed a 287-page lawsuit in the Delhi High Court on Monday, alleging the AI company illegally used its content to train its AI models and generated false information attributed to the news agency. The case marks the first time an Indian media organization has taken legal action against OpenAI over copyright claims.

During Tuesday’s hearing, Justice Amit Bansal issued a summons to OpenAI after the company confirmed it had already ensured that ChatGPT wasn’t accessing ANI’s website. The bench said that it was not inclined to grant an injunction order on Tuesday, as the case required a detailed hearing for being a “complex issue.”

The next hearing is scheduled to be held in January."

Saturday, November 9, 2024

OpenAI Gets a Win as Court Says No Harm Was Demonstrated in Copyright Case; Gizmodo, November 8, 2024

Gizmodo; OpenAI Gets a Win as Court Says No Harm Was Demonstrated in Copyright Case

"OpenAI won an initial victory on Thursday in one of the many lawsuits the company is facing for its unlicensed use of copyrighted material to train generative AI products like ChatGPT.

A federal judge in the southern district of New York dismissed a complaint brought by the media outlets Raw Story and AlterNet, which claimed that OpenAI violated copyright law by purposefully removing what is known as copyright management information, such as article titles and author names, from material that it incorporated into its training datasets.

OpenAI had filed a motion to dismiss the case, arguing that the plaintiffs did not have standing to sue because they had not demonstrated a concrete harm to their businesses caused by the removal of the copyright management information. Judge Colleen McMahon agreed, dismissing the lawsuit but leaving the door open for the plaintiffs to file an amended complaint."

Wednesday, October 23, 2024

Former OpenAI Researcher Says the Company Broke Copyright Law; The New York Times, October 23, 2024

The New York Times; Former OpenAI Researcher Says the Company Broke Copyright Law

"Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.

“This is not a sustainable model for the internet ecosystem as a whole,” he told The Times."

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include..."

Friday, October 11, 2024

Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room; Business Insider, October 10, 2024

Business Insider; Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room

"OpenAI is worth $157 billion largely because of the success of ChatGPT. But to build the chatbot, the company trained its models on vast quantities of text it didn't pay a penny for.

That text includes stories from The New York Times, articles from other publications, and an untold number of copyrighted books.

The examination of the code for ChatGPT, as well as for Microsoft's artificial intelligence models built using OpenAI's technology, is crucial for the copyright infringement lawsuits against the two companies.

Publishers and artists have filed about two dozen major copyright lawsuits against generative AI companies. They are out for blood, demanding a slice of the economic pie that made OpenAI the dominant player in the industry and which pushed Microsoft's valuation beyond $3 trillion. Judges deciding those cases may carve out the legal parameters for how large language models are trained in the US."