Showing posts with label fair use.

Thursday, April 17, 2025

Creators Are Losing the AI Copyright Battle. We Have to Keep Fighting (Guest Column); The Hollywood Reporter, April 16, 2025

Ed Newton-Rex, The Hollywood Reporter; Creators Are Losing the AI Copyright Battle. We Have to Keep Fighting (Guest Column)

"The struggle between AI companies and creatives around “training data” — or what you and I would refer to as people’s life’s work — may be the defining struggle of this generation for the media industries. AI companies want to exploit creators’ work without paying them, using it to train AI models that compete with those creators; creators and rights holders are doing everything they can to stop them."

Wednesday, April 16, 2025

The real argument artists should be making against AI; Vox, April 16, 2025

Sigal Samuel, Vox; The real argument artists should be making against AI

[Paywall to access]

Why Musk and Dorsey want to ‘delete all IP law’; The Washington Post, April 15, 2025

Analysis by The Washington Post; Why Musk and Dorsey want to ‘delete all IP law’

"Jack Dorsey, the co-founder of Twitter and CEO of Square, posted a cryptic and drastic demand on Elon Musk’s X over the weekend: “delete all IP law.” The post drew a quick reply from Mr. X himself: “I agree.”

Musk’s laconic response amplified Dorsey’s post to his 220 million followers and sparked a debate that drew in a cast of characters including Epic Games CEO Tim Sweeney, tech lawyer and former vice presidential candidate Nicole Shanahan, novelist Walter Kirn, evolutionary psychologist Geoffrey Miller and the technologist and early Twitter developer Evan Henshaw-Plath, a.k.a. Rabble, among others...

Serious policy idea or not, the concord between Dorsey and Musk highlights how the debate over AI and copyright law is coming to a head in Silicon Valley.

How it’s resolved will have major ramifications for the tech companies, creative people and their livelihoods and the overall AI race."

Sunday, April 13, 2025

Law professors side with authors battling Meta in AI copyright case; TechCrunch, April 11, 2025

Kyle Wiggers, TechCrunch; Law professors side with authors battling Meta in AI copyright case

"A group of professors specializing in copyright law has filed an amicus brief in support of authors suing Meta for allegedly training its Llama AI models on e-books without permission.

The brief, filed on Friday in the U.S. District Court for the Northern District of California, San Francisco Division, calls Meta’s fair use defense “a breathtaking request for greater legal privileges than courts have ever granted human authors.”"

Wednesday, April 2, 2025

EFF Urges Third Circuit to Join the Legal Chorus: No One Owns the Law; Electronic Frontier Foundation (EFF), March 31, 2025

 CORYNNE MCSHERRY, Electronic Frontier Foundation (EFF); EFF Urges Third Circuit to Join the Legal Chorus: No One Owns the Law

"This case concerns UpCodes, a company that has created a database of building codes—like the National Electrical Code—that includes codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law, and therefore has the right to control how the public accesses and shares them. Fortunately, neither the Constitution nor the Copyright Act support that theory. Faced with similar claims, some courts, including the Fifth Circuit Court of Appeals, have held that the codes lose copyright protection when they are incorporated into law. Others, like the D.C. Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, making them fully accessible and usable online is a lawful fair use. A federal court in Pennsylvania followed the latter path in this case, finding that UpCodes’ database was a protected fair use."

Thursday, March 27, 2025

Judge allows 'New York Times' copyright case against OpenAI to go forward; NPR, March 27, 2025

NPR; Judge allows 'New York Times' copyright case against OpenAI to go forward

"A federal judge on Wednesday rejected OpenAI's request to toss out a copyright lawsuit from The New York Times that alleges that the tech company exploited the newspaper's content without permission or payment.

In an order allowing the lawsuit to go forward, Judge Sidney Stein, of the Southern District of New York, narrowed the scope of the lawsuit but allowed the case's main copyright infringement claims to go forward.

Stein did not immediately release an opinion but promised one would come "expeditiously."

The decision is a victory for the newspaper, which has joined forces with other publishers, including The New York Daily News and the Center for Investigative Reporting, to challenge the way that OpenAI collected vast amounts of data from the web to train its popular artificial intelligence service, ChatGPT."

Monday, March 24, 2025

How to tell when AI models infringe copyright; The Washington Post, March 24, 2025

The Washington Post; How to tell when AI models infringe copyright

"Fair use has been a big part of AI companies’ defense. No matter how well a plaintiff manages to argue that a given AI model infringes copyright, the AI maker can usually point to the doctrine of fair use, which requires consideration of multiple factors, including the purpose of the use (here, criticism, comment and research are favored) and the effect of the use on the marketplace. If, in using a copied work, an AI model adds “something new,” it is probably in the clear."

Should AI be treated the same way as people are when it comes to copyright law?; The Hill, March 24, 2025

NICHOLAS CREEL, The Hill; Should AI be treated the same way as people are when it comes to copyright law?

"The New York Times’s lawsuit against OpenAI and Microsoft highlights an uncomfortable contradiction in how we view creativity and learning. While the Times accuses these companies of copyright infringement for training AI on their content, this ignores a fundamental truth: AI systems learn exactly as humans do, by absorbing, synthesizing and transforming existing knowledge into something new."

Sunday, March 16, 2025

The AI Copyright Battle: Why OpenAI And Google Are Pushing For Fair Use; Forbes, March 15, 2025

Virginie Berger, Forbes; The AI Copyright Battle: Why OpenAI And Google Are Pushing For Fair Use

"Furthermore, the ongoing lawsuits against AI firms could serve as a necessary correction to push the industry toward genuinely intelligent machine learning models instead of data-compression-based generators masquerading as intelligence. If legal challenges force AI firms to rethink their reliance on copyrighted content, it could spur innovation toward creating more advanced, ethically sourced AI systems...

Recommendations: Finding a Sustainable Balance

A sustainable solution must reconcile technological innovation with creators' economic interests. Policymakers should develop clear federal standards specifying fair use parameters for AI training, considering solutions such as:

  • Licensing and Royalties: Transparent licensing arrangements compensating creators whose work is integral to AI datasets.
  • Curated Datasets: Government or industry-managed datasets explicitly approved for AI training, ensuring fair compensation.
  • Regulated Exceptions: Clear legal definitions distinguishing transformative use in AI training contexts.

These nuanced policies could encourage innovation without sacrificing creators’ rights.

The lobbying by OpenAI and Google reveals broader tensions between rapid technological growth and ethical accountability. While national security concerns warrant careful consideration, they must not justify irresponsible regulation or ethical compromises. A balanced approach, preserving innovation, protecting creators’ rights, and ensuring sustainable and ethical AI development, is critical for future global competitiveness and societal fairness."

OpenAI declares AI race “over” if training on copyrighted works isn’t fair use; Ars Technica, March 13, 2025

ASHLEY BELANGER, Ars Technica; OpenAI declares AI race “over” if training on copyrighted works isn’t fair use

"OpenAI is hoping that Donald Trump's AI Action Plan, due out this July, will settle copyright debates by declaring AI training fair use—paving the way for AI companies' unfettered access to training data that OpenAI claims is critical to defeat China in the AI race.

Currently, courts are mulling whether AI training is fair use, as rights holders say that AI models trained on creative works threaten to replace them in markets and water down humanity's creative output overall.

OpenAI is just one AI company fighting with rights holders in several dozen lawsuits, arguing that AI transforms copyrighted works it trains on and alleging that AI outputs aren't substitutes for original works.

So far, one landmark ruling favored rights holders, with a judge declaring AI training is not fair use, as AI outputs clearly threatened to replace Thomson-Reuters' legal research firm Westlaw in the market, Wired reported. But OpenAI now appears to be looking to Trump to avoid a similar outcome in its lawsuits, including a major suit brought by The New York Times."

Saturday, March 8, 2025

Hell is Clearing Permissions: Looking for Lifelines and Deliverance [5,000th post since this blog started in 2008]; IP, AI & OM, March 8, 2025

Kip Currier, IP, AI & OM; Hell is Clearing Permissions: Looking for Lifelines and Deliverance [5,000th post since this blog started in 2008]


Hell is Clearing Permissions: Looking for Lifelines and Deliverance

French Existentialist Jean-Paul Sartre famously opined "L'enfer, c'est les autres" ("Hell is other people"). This post won't be weighing in on the nuances of that declaration by a character in his 1944 play Huis Clos (No Exit) -- although candidates who could easily qualify as diabolic "other people" may spring to mind for you too.

However, thinking about an array of challenging experiences I've had while working on clearing permissions for the use of images and textual material in my forthcoming textbook, Ethics, Information, and Technology, I thought of Sartre's grim observation, with a twist: Hell is clearing permissions.

I've been teaching a Copyright and Fair Use course since 2009, which expanded into an IP and Open Movements course around 2015, so I'm neither new to copyright law and fair use issues nor unfamiliar with clearing permissions to use images. Graduate students in the course read Kembrew McLeod's Freedom of Expression and Pat Aufderheide and Peter Jaszi's Reclaiming Fair Use: How To Put Balance Back In Copyright, both of which deal with "permissions culture". I also have my students get familiar with permissions issues via a free comic book, Bound By Law?, created by Duke Law School's Center for the Study of the Public Domain. The "Bound By Law?" authors, Keith Aoki, James Boyle, and Jennifer Jenkins, chronicle real-life travails faced by creators lawfully trying to exercise fair use while creating new works and balancing licensing costs. One of my favorite examples in the book is the documentary filmmakers who happen to capture images from an episode of The Simpsons displayed on a TV set while filming the backstage lives of stagehands working on Wagner's Ring Cycle opera.

Yet, despite fairly significant copyright and fair use knowledge, as well as frequently participating in copyright webinars and trainings, this is the first time I've worked on clearing permissions for a book of my own. The experiences have been eye-opening, to say the least. Two insights and "needs" continue to jump out at me:

  • (1) the need for more responsive, user-friendly, and expedient ways to clear permissions, and 
  • (2) the need for more accessible and readily understandable information sources to aid authors in the do's and don'ts of clearing permissions.

I do need to acknowledge the many contributions that copyright and fair use scholars Pat Aufderheide and Peter Jaszi, mentioned above, have made in bringing together collaborative groups that have created "Best Practices" primers for a number of areas, such as their 2012 Code of Best Practices in Fair Use for Academic and Research Libraries.

Much more can be done, though, to help newer authors and creators, as well as seasoned pros, navigate the hurdles and potential pitfalls of securing permission to use images. Information professionals -- librarians and other staff within libraries, archives, and museums, for example -- are well-equipped and positioned to use their unique skill sets to help creators successfully maneuver through clearing permissions-related "obstacle courses".

In future posts, I'll share insights, lessons learned, and tips on mitigating "hellish" experiences and moving from uncertain "limbo" to more clarity on image permissions.

Thursday, February 27, 2025

An AI Maker Was Just Found Liable for Copyright Infringement. What Does This Portend for Content Creators and AI Makers?; The Federalist Society, February 25, 2025

The Federalist Society; An AI Maker Was Just Found Liable for Copyright Infringement. What Does This Portend for Content Creators and AI Makers?

"In a case decided on February 11, the makers of generative AI (GenAI), such as ChatGPT, lost the first legal battle in the war over whether they commit copyright infringement by using the material of others as training data without permission. The case is called Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.

If other courts follow this ruling, the cost of building and selling GenAI services will dramatically increase. Such businesses are already losing money.

The ruling could also empower content creators, such as writers, to deny the use of their material to train GenAIs or to demand license fees. Some creators might be unwilling to license use of their material for training AIs due to fear that GenAI will destroy demand for their work."

Sunday, February 16, 2025

Court filings show Meta paused efforts to license books for AI training; TechCrunch, February 14, 2025

Kyle Wiggers, TechCrunch; Court filings show Meta paused efforts to license books for AI training

"According to one transcript, Sy Choudhury, who leads Meta’s AI partnership initiatives, said that Meta’s outreach to various publishers was met with “very slow uptake in engagement and interest.”

“I don’t recall the entire list, but I remember we had made a long list from initially scouring the Internet of top publishers, et cetera,” Choudhury said, per the transcript, “and we didn’t get contact and feedback from — from a lot of our cold call outreaches to try to establish contact.”

Choudhury added, “There were a few, like, that did, you know, engage, but not many.”

According to the court transcripts, Meta paused certain AI-related book licensing efforts in early April 2023 after encountering “timing” and other logistical setbacks. Choudhury said some publishers, in particular fiction book publishers, turned out to not in fact have the rights to the content that Meta was considering licensing, per a transcript.

“I’d like to point out that the — in the fiction category, we quickly learned from the business development team that most of the publishers we were talking to, they themselves were representing that they did not have, actually, the rights to license the data to us,” Choudhury said. “And so it would take a long time to engage with all their authors.”"

Wednesday, February 12, 2025

Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law; The National Law Review, February 11, 2025

Joseph A. Meckes and Joseph Grasser of Squire Patton Boggs (US) LLP - Global IP and Technology Law Blog, The National Law Review; Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law

"In what may turn out to be an influential decision, Judge Stephanos Bibas ruled as a matter of law in Thompson Reuters v. Ross Intelligence that creating short summaries of law to train Ross Intelligence’s artificial intelligence legal research application not only infringes Thompson Reuters’ copyrights as a matter of law but that the copying is not fair use. Judge Bibas had previously ruled that infringement and fair use were issues for the jury but changed his mind: “A smart man knows when he is right; a wise man knows when he is wrong.”

At issue in the case was whether Ross Intelligence directly infringed Thompson Reuters’ copyrights in its case law headnotes that are organized by Westlaw’s proprietary Key Number system. Thompson Reuters contended that Ross Intelligence’s contractor copied those headnotes to create “Bulk Memos.” Ross Intelligence used the Bulk Memos to train its competitive AI-powered legal research tool. Judge Bibas ruled that (i) the West headnotes were sufficiently original and creative to be copyrightable, and (ii) some of the Bulk Memos used by Ross were so similar that they infringed as a matter of law...

In other words, even if a work is selected entirely from the public domain, the simple act of selection is enough to give rise to copyright protection."

Tuesday, January 28, 2025

It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy; Electronic Frontier Foundation (EFF), January 27, 2025

KATHARINE TRENDACOSTA, Electronic Frontier Foundation (EFF); It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation 

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online, therefore laws and policy should be made in the open and with users’ concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information.
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek." 

Thursday, January 23, 2025

Rock & Roll Hall of Fame Aims to Axe Copyright Lawsuit Over Van Halen Guitar Photo; Billboard, January 22, 2025

BILL DONAHUE, Billboard; Rock & Roll Hall of Fame Aims to Axe Copyright Lawsuit Over Van Halen Guitar Photo

"The Rock Hall is just the latest company to face such a lawsuit from Zlozower, who also snapped images of Led Zeppelin, The Rolling Stones, Michael Jackson and Bruce Springsteen over a decades-long career. Since 2016, court records show he’s filed nearly 60 copyright cases against a range of defendants over images of Elvis Costello, Guns N’ Roses, Mötley Crüe and more...

In their motion to dismiss the case, the Rock Hall’s attorneys say the museum made a “transformative use” of Zlozower’s original image — a key question when courts decide fair use. They say the Hall used it not simply as an image of the band, but “to contextualize Eddie Van Halen’s instruments on display in the museum as historical artifacts.”

“RRHOF incorporated a portion of plaintiff’s photograph displayed next to the exhibition object, as one piece of source material to document and represent the use of the guitar,” the museum’s lawyers write. “This proximal association between source material and exhibition object helps visitors connect information and delve more deeply into the exhibition objects.”

In making that argument, the Hall’s attorneys had a handy piece of legal precedent to cite: A 2021 ruling by a federal appeals court tossed out a copyright lawsuit against New York City’s Metropolitan Museum of Art over the use of another image of Van Halen in a different exhibit on the same famous set of guitars."

Sunday, January 19, 2025

Congress Must Change Copyright Law for AI | Opinion; Newsweek, January 16, 2025

Assistant Professor of Business Law, Georgia College and State University, Newsweek; Congress Must Change Copyright Law for AI | Opinion

"Luckily, the Constitution points the way forward. In Article I, Section 8, Congress is explicitly empowered "to promote the Progress of Science" through copyright law. That is to say, the power to create copyrights isn't just about protecting content creators, it's also about advancing human knowledge and innovation.

When the Founders gave Congress this power, they couldn't have imagined artificial intelligence, but they clearly understood that intellectual property laws would need to evolve to promote scientific progress. Congress therefore not only has the authority to adapt copyright law for the AI age, it has the duty to ensure our intellectual property framework promotes rather than hinders technological progress.

Consider what's at risk with inaction...

While American companies are struggling with copyright constraints, China is racing ahead with AI development, unencumbered by such concerns. The Chinese Communist Party has made it clear that they view AI supremacy as a key strategic goal, and they're not going to let intellectual property rights stand in their way.

The choice before us is clear, we can either reform our copyright laws to enable responsible AI development at home or we can watch as the future of AI is shaped by authoritarian powers abroad. The cost of inaction isn't just measured in lost innovation or economic opportunity, it is measured in our diminishing ability to ensure AI develops in alignment with democratic values and a respect for human rights.

The ideal solution here isn't to abandon copyright protection entirely, but to craft a careful exemption for AI training. This could even include provisions for compensating content creators through a mandated licensing framework or revenue-sharing system, ensuring that AI companies can access the data they need while creators can still benefit from and be credited for their work's use in training these models.

Critics will argue that this represents a taking from creators for the benefit of tech companies, but this misses the broader picture. The benefits of AI development flow not just to tech companies but to society as a whole. We should recognize that allowing AI models to learn from human knowledge serves a crucial public good, one we're at risk of losing if Congress doesn't act."

Saturday, January 18, 2025

News organizations sue OpenAI over copyright infringement claims; Jurist.org, January 16, 2025

Jurist.org; News organizations sue OpenAI over copyright infringement claims

"The case centers on allegations that OpenAI unlawfully utilized copyrighted content from various publishers, including The New York Times, to train its generative AI models and the hearing could determine whether OpenAI will face trial.

The plaintiffs claim that ChatGPT’s ability to generate human-like responses stems from the unauthorized use of their work without permission or compensation to develop their large language models (LLMs). OpenAI and its financial backer Microsoft argue that its use of data is protected under the fair use doctrine, which allows limited use of copyrighted material without permission for purposes such as commentary, criticism or education.

Additionally, OpenAI’s legal team asserts that The New York Times has not demonstrated actual harm resulting from their practices and that its use of the copyrighted material is transformative as it does not replicate the content verbatim. On the other hand, the plaintiffs are arguing copyright infringement because OpenAI removed identifiable information such as author bylines and publication details when using the content. They also contend that the LLMs absorb and reproduce expressions from the training data without genuine understanding."

Thursday, January 16, 2025

In AI copyright case, Zuckerberg turns to YouTube for his defense; TechCrunch, January 15, 2025

TechCrunch; In AI copyright case, Zuckerberg turns to YouTube for his defense

"Meta CEO Mark Zuckerberg appears to have used YouTube’s battle to remove pirated content to defend his own company’s use of a data set containing copyrighted e-books, reveals newly released snippets of a deposition he gave late last year.

The deposition, which was part of a complaint submitted to the court by plaintiffs’ attorneys, is related to the AI copyright case Kadrey v. Meta. It’s one of many such cases winding through the U.S. court system that’s pitting AI companies against authors and other IP holders. For the most part, the defendants in these cases – AI companies – claim that training on copyrighted content is “fair use.” Many copyright holders disagree."

Wednesday, January 15, 2025

'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line; NPR, January 14, 2025

NPR; 'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line

"A group of news organizations, led by The New York Times, took ChatGPT maker OpenAI to federal court on Tuesday in a hearing that could determine whether the tech company has to face the publishers in a high-profile copyright infringement trial.

Three publishers' lawsuits against OpenAI and its financial backer Microsoft have been merged into one case. Leading each of the three combined cases are the Times, The New York Daily News and the Center for Investigative Reporting.

Other publishers, like the Associated Press, News Corp. and Vox Media, have reached content-sharing deals with OpenAI, but the three litigants in this case are taking the opposite path: going on the offensive."