Wednesday, March 26, 2025

Doctors Told Him He Was Going to Die. Then A.I. Saved His Life.; The New York Times, March 20, 2025

The New York Times; Doctors Told Him He Was Going to Die. Then A.I. Saved His Life.

"In labs around the world, scientists are using A.I. to search among existing medicines for treatments that work for rare diseases. Drug repurposing, as it’s called, is not new, but the use of machine learning is speeding up the process — and could expand the treatment possibilities for people with rare diseases and few options.

Thanks to versions of the technology developed by Dr. Fajgenbaum’s team at the University of Pennsylvania and elsewhere, drugs are being quickly repurposed for conditions including rare and aggressive cancers, fatal inflammatory disorders and complex neurological conditions. And often, they’re working."

Richard Osman urges writers to ‘have a good go’ at Meta over breaches of copyright; The Guardian, March 25, 2025

The Guardian; Richard Osman urges writers to ‘have a good go’ at Meta over breaches of copyright

"Richard Osman has said that writers will “have a good go” at taking on Meta after it emerged that the company used a notorious database believed to contain pirated books to train artificial intelligence.

“Copyright law is not complicated at all,” the author of The Thursday Murder Club series wrote in a statement on X on Sunday evening. “If you want to use an author’s work you need to ask for permission. If you use it without permission you’re breaking the law. It’s so simple.”

In January, it emerged that Mark Zuckerberg approved his company’s use of The Library Genesis dataset, a “shadow library” that originated in Russia and contains more than 7.5m books. In 2024 a New York federal court ordered LibGen’s anonymous operators to pay a group of publishers $30m (£24m) in damages for copyright infringement. Last week, the Atlantic republished a searchable database of the titles contained in LibGen. In response, authors and writers’ organisations have rallied against Meta’s use of copyrighted works."

Search LibGen, the Pirated-Books Database That Meta Used to Train AI; The Atlantic, March 20, 2025

Alex Reisner, The Atlantic; Search LibGen, the Pirated-Books Database That Meta Used to Train AI

"Editor’s note: This search tool is part of The Atlantic’s investigation into the Library Genesis data set. You can read an analysis about LibGen and its contents here. Find The Atlantic’s search tool for movie and television writing used to train AI here."

Anthropic wins early round in music publishers' AI copyright case; Reuters, March 26, 2025

Reuters; Anthropic wins early round in music publishers' AI copyright case

"Artificial intelligence company Anthropic convinced a California federal judge on Tuesday to reject a preliminary bid to block it from using lyrics owned by Universal Music Group and other music publishers to train its AI-powered chatbot Claude.

U.S. District Judge Eumi Lee said that the publishers' request was too broad and that they failed to show Anthropic's conduct caused them “irreparable harm.”"

Tuesday, March 25, 2025

Ben Stiller, Mark Ruffalo and More Than 400 Hollywood Names Urge Trump to Not Let AI Companies ‘Exploit’ Copyrighted Works; Variety, March 17, 2025

Todd Spangler, Variety; Ben Stiller, Mark Ruffalo and More Than 400 Hollywood Names Urge Trump to Not Let AI Companies ‘Exploit’ Copyrighted Works

"More than 400 Hollywood creative leaders signed an open letter to the Trump White House’s Office of Science and Technology Policy, urging the administration to not roll back copyright protections at the behest of AI companies.

The filmmakers, writers, actors, musicians and others — which included Ben Stiller, Mark Ruffalo, Cynthia Erivo, Cate Blanchett, Cord Jefferson, Paul McCartney, Ron Howard and Taika Waititi — were submitting comments for the Trump administration’s U.S. AI Action Plan. The letter specifically was penned in response to recent submissions to the Office of Science and Technology Policy from OpenAI and Google, which asserted that U.S. copyright law allows (or should allow) AI companies to train their systems on copyrighted works without obtaining permission from (or compensating) rights holders."

Monday, March 24, 2025

How to tell when AI models infringe copyright; The Washington Post, March 24, 2025

The Washington Post; How to tell when AI models infringe copyright

"Fair use has been a big part of AI companies’ defense. No matter how well a plaintiff manages to argue that a given AI model infringes copyright, the AI maker can usually point to the doctrine of fair use, which requires consideration of multiple factors, including the purpose of the use (here, criticism, comment and research are favored) and the effect of the use on the marketplace. If, in using a copied work, an AI model adds “something new,” it is probably in the clear."

Should AI be treated the same way as people are when it comes to copyright law?; The Hill, March 24, 2025

Nicholas Creel, The Hill; Should AI be treated the same way as people are when it comes to copyright law?

"The New York Times’s lawsuit against OpenAI and Microsoft highlights an uncomfortable contradiction in how we view creativity and learning. While the Times accuses these companies of copyright infringement for training AI on their content, this ignores a fundamental truth: AI systems learn exactly as humans do, by absorbing, synthesizing and transforming existing knowledge into something new."

The Perils of ‘Free’ Information; Cato Institute, Spring 2025

 Jonathan M. Barnett, Cato Institute; The Perils of ‘Free’ Information

"Everyone likes free stuff. But weak IP rights distort innovation ecosystems over the longer term and, in biopharmaceutical markets, would likely induce significant capital flight to other investment opportunities. Author and entrepreneur Stewart Brand, who coined the slogan “information wants to be free,” also observed in the same comments that “information wants to be expensive.” That second quote is critical.

Absent meaningful property rights, stand-alone innovators and creators have limited ability to capture economic value that reflects their contribution to the knowledge ecosystem. This raises the risk of the content and tech pipeline running dry or innovation being confined to a handful of “walled gardens” comprised of integrated networks of products and services.

Far from being a monopoly that suppresses competition, secure IP rights are often a precondition for sustaining the innovators and artists that drive knowledge ecosystems. When information is free, society can pay a high price."

Friday, March 21, 2025

AI firms push to use copyrighted content freely; Axios, March 20, 2025

 Ina Fried, Axios; AI firms push to use copyrighted content freely

"A sharp divide over AI engines' free use of copyrighted material has emerged as a key conflict among the firms and groups that recently flooded the White House with advice on its forthcoming "AI Action Plan."

Why it matters: Copyright infringement claims were among the first legal challenges following ChatGPT's launch, with multiple lawsuits now winding their way through the courts.

Driving the news: In their White House memos, OpenAI and Google argue that their use of copyrighted material for AI is a matter of national security — and if that use is limited, China will gain an unfair edge in the AI race."

Wednesday, March 19, 2025

Hollywood creatives urge government to defend copyright laws against AI; Los Angeles Times, March 18, 2025

Wendy Lee, Los Angeles Times; Hollywood creatives urge government to defend copyright laws against AI

"More than 400 Hollywood creatives, including director Guillermo del Toro and actors Cynthia Erivo and Joseph Gordon-Levitt, are urging the U.S. government to uphold existing copyright protections against artificial intelligence. 

“We firmly believe that America’s global AI leadership must not come at the expense of our essential creative industries,” they wrote in a letter to the White House Office of Science and Technology Policy last week.

“There is no reason to weaken or eliminate the copyright protections that have helped America flourish,” the letter said. “Not when AI companies can use our copyrighted material by simply doing what the law requires: negotiating appropriate licenses with copyright holders — just as every other industry does.”"

DC Circuit rules AI-generated work ineligible for copyright; Courthouse News Service, March 18, 2025

Courthouse News Service; DC Circuit rules AI-generated work ineligible for copyright

"In a landmark opinion over the copyrightability of works created by artificial intelligence, a D.C. Circuit panel ruled on Tuesday that human authorship is required for copyright protection.

As AI technology quickly advances and intertwines with human creations, the unanimous opinion lays down the first precedential marker over who or what is the author of work created solely by artificial intelligence under copyright law.

The case stems from Dr. Stephen Thaler, a computer scientist who creates and works with artificial intelligence systems and created a generative artificial intelligence named the “Creativity Machine.”"

Sunday, March 16, 2025

The AI Copyright Battle: Why OpenAI And Google Are Pushing For Fair Use; Forbes, March 15, 2025

Virginie Berger, Forbes; The AI Copyright Battle: Why OpenAI And Google Are Pushing For Fair Use

"Furthermore, the ongoing lawsuits against AI firms could serve as a necessary correction to push the industry toward genuinely intelligent machine learning models instead of data-compression-based generators masquerading as intelligence. If legal challenges force AI firms to rethink their reliance on copyrighted content, it could spur innovation toward creating more advanced, ethically sourced AI systems...

Recommendations: Finding a Sustainable Balance

A sustainable solution must reconcile technological innovation with creators' economic interests. Policymakers should develop clear federal standards specifying fair use parameters for AI training, considering solutions such as:

  • Licensing and Royalties: Transparent licensing arrangements compensating creators whose work is integral to AI datasets.
  • Curated Datasets: Government or industry-managed datasets explicitly approved for AI training, ensuring fair compensation.
  • Regulated Exceptions: Clear legal definitions distinguishing transformative use in AI training contexts.

These nuanced policies could encourage innovation without sacrificing creators’ rights.

The lobbying by OpenAI and Google reveals broader tensions between rapid technological growth and ethical accountability. While national security concerns warrant careful consideration, they must not justify irresponsible regulation or ethical compromises. A balanced approach, preserving innovation, protecting creators’ rights, and ensuring sustainable and ethical AI development, is critical for future global competitiveness and societal fairness."

OpenAI declares AI race “over” if training on copyrighted works isn’t fair use; Ars Technica, March 13, 2025

Ashley Belanger, Ars Technica; OpenAI declares AI race “over” if training on copyrighted works isn’t fair use

"OpenAI is hoping that Donald Trump's AI Action Plan, due out this July, will settle copyright debates by declaring AI training fair use—paving the way for AI companies' unfettered access to training data that OpenAI claims is critical to defeat China in the AI race.

Currently, courts are mulling whether AI training is fair use, as rights holders say that AI models trained on creative works threaten to replace them in markets and water down humanity's creative output overall.

OpenAI is just one AI company fighting with rights holders in several dozen lawsuits, arguing that AI transforms copyrighted works it trains on and alleging that AI outputs aren't substitutes for original works.

So far, one landmark ruling favored rights holders, with a judge declaring AI training is not fair use, as AI outputs clearly threatened to replace Thomson-Reuters' legal research firm Westlaw in the market, Wired reported. But OpenAI now appears to be looking to Trump to avoid a similar outcome in its lawsuits, including a major suit brought by The New York Times."

Friday, March 14, 2025

French publishers and authors sue Meta over copyright works used in AI training; AP, March 12, 2025

Kelvin Chan, AP; French publishers and authors sue Meta over copyright works used in AI training

"French publishers and authors said Wednesday they’re taking Meta to court, accusing the social media company of using their works without permission to train its artificial intelligence model. 

Three trade groups said they were launching legal action against Meta in a Paris court over what they said was the company’s “massive use of copyrighted works without authorization” to train its generative AI model. 

The National Publishing Union, which represents book publishers, has noted that “numerous works” from its members are turning up in Meta’s data pool, the group’s president, Vincent Montagne, said in a joint statement."

Wednesday, March 12, 2025

The Copyright Office takes on the sticky issue of artificial intelligence; Federal News Network, March 11, 2025

 Tom Temin, Federal News Network; The Copyright Office takes on the sticky issue of artificial intelligence

"Artificial intelligence raises storms of questions in every domain it touches. Chief among them, copyright questions. Now the U.S. Copyright Office, a congressional agency, has completed the second of two studies of AI and copyrights. This one deals with whether you can copyright outputs created using AI. Emily Chapuis, the Copyright Office’s deputy general counsel, joined the Federal Drive with Tom Temin to discuss...

Emily Chapuis: Yeah. That’s right. So we don’t recommend in the report that Congress take any action. And the reason for this is we think that copyright law is sufficiently flexible to deal with changes in technology. And that’s not just based on AI, but on the entire history of copyright law, has had to deal with these questions, whether it’s the development of the camera or the internet. The questions about copyrightability are always on a case-by-case basis. And the technology that’s used and how it’s used and what it’s used for are important elements of that. But the sort of defining legal principles aren’t different in this context than in those other ones.

Tom Temin: Right. So the human input idea then is kind of an eternal for copyright. How do you decide that? Is it a percentage of human input? Because the machine does a lot here. But you could say, ‘Well, the camera did a lot when it opened and closed the shutter and exposed silver halide. And then there was a machine process to produce that image. But it was the selection, the timing, the decisive moment.’ To quote Henri Cartier-Bresson, another French photographer. That’s really the issue here. The human input and not the machine input.

Emily Chapuis: Yeah, that’s right. And it’s hard to parse. I mean, we’ve had people ask, so what’s the percentage that has to be human created? And there’s not a clear answer to that, again, because it’s case by case. But also the question isn’t really amount as much as it is control. So who’s controlling the expression. And so one of the things that we try to explain is that even the same technology can be used in a variety of different ways. So you can use generative AI technology as a tool assistive to enhance the human expression or you can use it as a substitute for human expression. And so control is sort of the bottom line in terms of what we’re looking at to draw that distinction."

Tuesday, March 11, 2025

Judge says Meta must defend claim it stripped copyright info from Llama's training fodder; The Register, March 11, 2025

Thomas Claburn, The Register; Judge says Meta must defend claim it stripped copyright info from Llama's training fodder

"A judge has found Meta must answer a claim it allegedly removed so-called copyright management information from material used to train its AI models.

The Friday ruling by Judge Vince Chhabria concerned the case Kadrey et al vs Meta Platforms, filed in July 2023 in a San Francisco federal court as a proposed class action by authors Richard Kadrey, Sarah Silverman, and Christopher Golden, who reckon the Instagram titan's use of their work to train its neural networks was illegal.

Their case burbled along until January 2025 when the plaintiffs made the explosive allegation that Meta knew it used copyrighted material for training, and that its AI models would therefore produce results that included copyright management information (CMI) – the fancy term for things like the creator of a copyrighted work, its license and terms of use, its date of creation, and so on, that accompany copyrighted material.

The miffed scribes alleged Meta therefore removed all of this copyright info from the works it used to train its models so users wouldn’t be made aware the results they saw stemmed from copyrighted stuff."

Saturday, March 8, 2025

Hell is Clearing Permissions: Looking for Lifelines and Deliverance [5,000th post since this blog started in 2008]; IP, AI & OM, March 8, 2025

Kip Currier, IP, AI & OM; Hell is Clearing Permissions: Looking for Lifelines and Deliverance [5,000th post since this blog started in 2008]


Hell is Clearing Permissions: Looking for Lifelines and Deliverance

French Existentialist Jean-Paul Sartre famously opined "L'enfer, c'est les autres" (Hell is other people). This post won't be weighing in on the nuances of that declaration by a character in his 1944 play Huis Clos (No Exit) -- although candidates who could easily qualify as diabolic "other people" may spring to mind for you too.

However, thinking about an array of challenging experiences I've had while working on clearing permissions for the use of images and textual material in my forthcoming textbook, Ethics, Information, and Technology, I thought of Sartre's grim observation, with a twist: Hell is clearing permissions.

I've been teaching a Copyright and Fair Use course since 2009, which expanded into an IP and Open Movements course around 2015, so I'm neither new to copyright law and fair use issues nor unfamiliar with clearing permissions to use images. Graduate students in the course read Kembrew McLeod's Freedom of Expression and Pat Aufderheide and Peter Jaszi's Reclaiming Fair Use: How To Put Balance Back In Copyright, both of which deal with "permissions culture". I also have my students get familiar with permissions issues via a free comic book, Bound By Law?, created by Duke Law School's Center for the Study of the Public Domain. The "Bound By Law?" authors, Keith Aoki, James Boyle, and Jennifer Jenkins, chronicle real-life travails faced by creators lawfully trying to exercise fair use while creating new works and balancing licensing costs. One of my favorite examples in the book involves documentary filmmakers who happened to capture images from an episode of The Simpsons displayed on a TV set while filming the backstage lives of stagehands working on Wagner's Ring Cycle opera.

Yet, despite fairly significant copyright and fair use knowledge, as well as frequent participation in copyright webinars and trainings, this is the first time I've worked on clearing permissions for a book of my own. The experiences have been eye-opening, to say the least. Two insights and "needs" continue to jump out at me:

  • (1) the need for more responsive, user-friendly, and expedient ways to clear permissions, and 
  • (2) the need for more accessible and readily understandable information sources to aid authors in the do's and don'ts of clearing permissions.

I do need to acknowledge the many contributions that copyright and fair use scholars Pat Aufderheide and Peter Jaszi, mentioned above, have made in bringing together collaborative groups that have created "Best Practices" primers for a number of areas, such as their 2012 Code of Best Practices in Fair Use for Academic and Research Libraries.

Much more can be done, though, to help newer authors and creators, as well as seasoned pros, navigate the hurdles and potential pitfalls of securing permission to use images. Information professionals -- librarians and other staff within libraries, archives, and museums, for example -- are well equipped and positioned to use their unique skill sets to help creators successfully maneuver through permissions-related "obstacle courses".

In future posts, I'll share insights, lessons learned, and tips on mitigating "hellish" experiences and moving from uncertain "limbo" to more clarity on image permissions.

Saturday, March 1, 2025

Prioritise artists over tech in AI copyright debate, MPs say; The Guardian, February 26, 2025

The Guardian; Prioritise artists over tech in AI copyright debate, MPs say

"Two cross-party committees of MPs have urged the government to prioritise ensuring that creators are fairly remunerated for their creative work over making it easy to train artificial intelligence models.

The MPs argued there needed to be more transparency around the vast amounts of data used to train generative AI models, and urged the government not to press ahead with plans to require creators to opt out of having their data used.

The government’s preferred solution to the tension between AI and copyright law is to allow AI companies to train the models on copyrighted work by giving them an exception for “text and data mining”, while giving creatives the opportunity to opt out through a “rights reservation” system.

The chair of the culture, media and sport committee, Caroline Dinenage, said there had been a “groundswell of concern from across the creative industries” in response to the proposals, which “illustrates the scale of the threat artists face from artificial intelligence pilfering the fruits of their hard-earned success without permission”.

She added that making creative works “fair game unless creators say so” was akin to “burglars being allowed into your house unless there’s a big sign on your front door expressly telling them that thievery isn’t allowed”."


Thursday, February 27, 2025

An AI Maker Was Just Found Liable for Copyright Infringement. What Does This Portend for Content Creators and AI Makers?; The Federalist Society, February 25, 2025

The Federalist Society; An AI Maker Was Just Found Liable for Copyright Infringement. What Does This Portend for Content Creators and AI Makers?

"In a case decided on February 11, the makers of generative AI (GenAI), such as ChatGPT, lost the first legal battle in the war over whether they commit copyright infringement by using the material of others as training data without permission. The case is called Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.

If other courts follow this ruling, the cost of building and selling GenAI services will dramatically increase. Such businesses are already losing money.

The ruling could also empower content creators, such as writers, to deny the use of their material to train GenAIs or to demand license fees. Some creators might be unwilling to license use of their material for training AIs due to fear that GenAI will destroy demand for their work."

Wednesday, February 26, 2025

Jeff Bezos is muzzling the Washington Post’s opinion section. That’s a death knell; The Guardian, February 26, 2025

The Guardian; Jeff Bezos is muzzling the Washington Post’s opinion section. That’s a death knell

"Owners and publishers of news organizations often exert their will on opinion sections. It would be naive to think otherwise.

But a draconian announcement this week by Jeff Bezos, the Washington Post owner, goes far beyond the norm.

The billionaire declared that only opinions that support “personal liberties” and “free markets” will be welcome in the opinion pages of the Post.

“Viewpoints opposing those pillars will be left to be published by others,” he added.

The paper’s top opinion editor, David Shipley, couldn’t get on board with those restrictions. He immediately – and appropriately – resigned.

Especially in the light of the billionaire’s other blatant efforts to cozy up to Donald Trump, Bezos’s move is more than a gut punch; it’s more like a death knell for the once-great news organization he bought in 2013...

What is clear is that Bezos no longer wants to own an independent news organization. He wants a megaphone and a political tool that will benefit his own commercial interests.

It’s appalling. And, if you care about the role of the press in America’s democracy, it’s tragic.

“What Bezos is doing today runs counter to what he said, and actually practiced, during my tenure at the Post,” Martin Baron, the paper’s executive editor until 2021 and the author of the 2023 memoir Collision of Power: Trump, Bezos and the Washington Post, told me in an email Wednesday.

“I have always been grateful for how he stood up for the Post and an independent press against Trump’s constant threats to his business interest,” Baron said. “Now, I couldn’t be more sad and disgusted.”...

This outrageous move will enrage them. I foresee a mass subscriber defection from an outlet already deep in red ink; that must be something businessman Bezos is willing to live with.

He must also be willing to live with hypocrisy.

“Bezos argues for personal liberties. But his news organization now will forbid views other than his own in its opinion section,” Baron pointed out, recalling that it was only weeks ago when the Post described itself in an internal mission statement as intended for “all of America”.

“Now,” Baron noted, “its opinion pages will be open to only some of America, those who think exactly as he does.”

It’s all about getting on board with Trump, to whose inauguration Bezos – through Amazon, the company he co-founded – contributed a million dollars. That allowed him a prime seat, along with others of his oligarchical ilk."