Saturday, July 5, 2025

Two Courts Rule On Generative AI and Fair Use — One Gets It Right; Electronic Frontier Foundation (EFF), June 26, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); Two Courts Rule On Generative AI and Fair Use — One Gets It Right

 "Gen-AI is spurring the kind of tech panics we’ve seen before; then, as now, thoughtful fair use opinions helped ensure that copyright law served innovation and creativity. Gen-AI does raise a host of other serious concerns about fair labor practices and misinformation, but copyright wasn’t designed to address those problems. Trying to force copyright law to play those roles only hurts important and legal uses of this technology.

In keeping with that tradition, courts deciding fair use in other AI copyright cases should look to Bartz, not Kadrey."

Ousted US copyright chief argues Trump did not have power to remove her; The Register, July 4, 2025

 Lindsay Clark, The Register; Ousted US copyright chief argues Trump did not have power to remove her

"The White House said the power to remove is aligned with the power to appoint. If there is no Librarian of Congress and the president cannot designate an acting librarian, the president's removal authority extends to inferior officers like the register of copyrights, it argued.

Perlmutter was expunged from office a few days after Librarian of Congress Carla Hayden was also shown the door. Hayden was later replaced by deputy attorney general Todd Blanche and Perlmutter by deputy attorney general Paul Perkins.

In the latest filing this week, Perlmutter's legal team said the administration's claim that it had the power to remove her from an office appointed by the Library of Congress employed a "novel constitutional theory" and "sweeping assertions of power."

The Copyright Office is housed in the Library of Congress, and the librarian oversees the Copyright Office head directly, Perlmutter said. Her filing argued that "neither the law nor common sense requires" that the court "should stand idly by and do nothing while [the Trump administration] wields unprecedented, and unlawful, authority.""

Thursday, July 3, 2025

Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default; Forbes, July 2, 2025

Emma Woollacott, Forbes; Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default

"IT service management company Cloudflare is striking back on behalf of content creators, blocking AI scrapers by default.

Web scrapers are bots that crawl the internet, collecting and cataloguing content of all types, and are used by AI firms to collect material that can be used to train their models.

Now, though, Cloudflare is allowing website owners to choose if they want AI crawlers to access their content, and decide how the AI companies can use it. They can opt to allow crawlers for certain purposes—search, for example—but block others. AI companies will have to obtain explicit permission from a website before scraping."
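
For context on the mechanics: the long-standing, voluntary way for site owners to signal crawler preferences is a robots.txt file that can treat search crawlers and AI-training crawlers differently. The sketch below is illustrative only and is not Cloudflare's mechanism (which enforces blocking at the network level rather than relying on crawler cooperation); the user-agent tokens shown are real, published crawler names.

```
User-agent: GPTBot          # OpenAI's training crawler
Disallow: /

User-agent: CCBot           # Common Crawl
Disallow: /

User-agent: Google-Extended # Google's AI-training control token
Disallow: /

User-agent: Googlebot       # search indexing still allowed
Allow: /
```

Directives like these are purely advisory, and some crawlers simply ignore them, which is why Cloudflare moving to default, permission-based blocking at its edge is a meaningful shift for content creators.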

2012 Video of Bill Moyers on the Freedom to Read and the "Bane of Banning Books"; Ethics, Info, Tech: Contested Voices, Values, Spaces, July 3, 2025

Kip Currier; 2012 Video of Bill Moyers on the Freedom to Read and the "Bane of Banning Books"

Nobody writes more illuminating "I-didn't-know-THAT-about-that-person" obituaries than the New York Times. (I didn't know, for example, that Moyers was an ordained Baptist minister.) And, true to form, the Times has an excellent obituary detailing the service-focused life of Bill Moyers, who passed away on June 26, 2025, at the age of 91.

The moment I learned of his death, my mind went to a 3-minute video clip of Moyers that I've continued to use in a graduate ethics course lecture I give on Intellectual Freedom and Censorship. The clip is from 2012, but the vital importance of libraries and the freedom to read that Moyers extols is as timely and essential as ever, given the explosion of book bans and censorship besetting the U.S. right now.

Below is a description of the video clip, followed by the video link:

"The Bane of Banned Books

September 25, 2012

In honor of the 30th anniversary of the American Library Association’s “Banned Books Week,” Bill talks about the impact libraries have had on his youth, his dismay over book challenges in modern times, and why censorship is the biggest enemy of truth."

https://billmoyers.com/content/the-bane-of-banned-books/

Wednesday, July 2, 2025

Fair Use or Foul Play? The AI Fair Use Copyright Line; The National Law Review, July 2, 2025

Jodi Benassi of McDermott Will & Emery, The National Law Review; Fair Use or Foul Play? The AI Fair Use Copyright Line

"Practice note: This is the first federal court decision analyzing the defense of fair use of copyrighted material to train generative AI. Two days after this decision issued, another Northern District of California judge ruled in Kadrey et al. v. Meta Platforms Inc. et al., Case No. 3:23-cv-03417, and concluded that the AI technology at issue in his case was transformative. However, the basis for his ruling in favor of Meta on the question of fair use was not transformation, but the plaintiffs’ failure “to present meaningful evidence that Meta’s use of their works to create [a generative AI engine] impacted the market” for the books."

Eminem, AI and me: why artists need new laws in the digital age; The Guardian, July 2, 2025

The Guardian; Eminem, AI and me: why artists need new laws in the digital age

"Song lyrics, my publisher informs me, are subject to notoriously strict copyright enforcement and the cost to buy the rights is often astronomical. Fat chance as well, then, of me quoting Eminem to talk about how Lose Yourself seeped into the psyche of a generation when he rapped: “You only get one shot, do not miss your chance to blow, this opportunity comes once in a lifetime.”

Oh would it be different if I were an AI company with a large language model (LLM), though. I could scrape from the complete discography of the National and Eminem, and the lyrics of every other song ever written. Then, when a user prompted something like, “write a rap in the style of Eminem about losing money, and draw inspiration from the National’s Bloodbuzz Ohio”, my word correlation program – with hundreds of millions of paying customers and a market capitalisation worth tens if not hundreds of billions of dollars – could answer:

“I still owe money to the money to the money I owe,

But I spit gold out my throat when I flow,

So go tell the bank they can take what they like

I already gave my soul to the mic.”

And that, according to rulings last month by the US courts, is somehow “fair use” and is perplexingly not copyright infringement at all, despite no royalties having been paid to anyone in the process."

Evangelical Report Says AI Needs Ethics; Christianity Today, July/August 2025

 

DANIEL SILLIMAN, Christianity Today; Evangelical Report Says AI Needs Ethics

"The Swiss Evangelical Alliance published a 78-page report on the ethics of artificial intelligence, calling on Christians to “help reduce the misuse of AI” and “set an example in the use of AI by demonstrating how technology can be used responsibly and for the benefit of all.” Seven people worked on the paper, including two theologians, several software engineers and computer science experts, a business consultant, and a futurist. They rejected the idea that Christians should close themselves off to AI, as that would not do anything to mitigate the risks of the developing technology. The group concluded that AI has a lot of potential to do good, if given ethical boundaries and shaped by Christian values such as honesty, integrity, and charity."

Tuesday, July 1, 2025

Inside the battle for control of the Library of Congress; Federal News Network, July 1, 2025

Terry Gerton , Federal News Network; Inside the battle for control of the Library of Congress

"Terry Gerton I’m speaking with Kevin Kosar. He’s a senior fellow at the American Enterprise Institute. So those are interesting theories. And as you mentioned though, the library is a research library, not a lending library. So AI is not going to train itself on printed books. It needs electronic information. What is the impact on the day-to-day operations of the library and the copyright office?

Kevin Kosar Well, right now, certainly, it’s a little anxiety-provoking for people at the Library of Congress, this kind of peculiar state of, are we suddenly going to find ourselves answering to a new boss in the form of the president? They are more than aware of what’s happened at other executive agencies where the president has sent in people from the Department of Government Efficiency and started turning off people’s computers and telling them not to come into work and canceling contracts and doing any number of other things that are, you know, hugely disruptive to workers’ day-to-day life. So there’s that anxiety there. And if this move by the Trump administration plays out, it’s really hard to see what could ultimately occur. One thing that’s clear to me is that if you have presidential control of the Library of Congress, then the Congressional Research Service is doomed. For those listeners out there who are not familiar with the Congressional Research Service, this is Congress’ think tank. This is about 600 individual civil servants whose job is to provide nonpartisan research, analysis and facts to legislators and their staff to help them better do their jobs. And if you have a president who takes over the library, that president can appoint the head of the Congressional Research Service and turn it into basically a presidential tool, which would make it useless.

Terry Gerton And the administration has sort of already said that it puts no stock in CRS’s products."

KY library book challenges rose 1,000% in 2024. That’s not a typo. What happened?; Lexington Herald Leader, June 30, 2025

John Cheves, Lexington Herald Leader; KY library book challenges rose 1,000% in 2024. That’s not a typo. What happened?

"Challenges to Kentucky public library books soared by 1,061% last year, rising from 26 incidents in 2023 to 302 incidents in 2024, according to a recently released state report. That eye-popping number is buried in small type at the bottom of page six of the annual Statistical Report of Kentucky Public Libraries, published in April by the Kentucky Department of Libraries and Archives."

The problems with California’s pending AI copyright legislation; Brookings, June 30, 2025

Brookings; The problems with California’s pending AI copyright legislation

 "California’s pending bill, AB-412, is a well-intentioned but problematic approach to addressing artificial intelligence (AI) and copyright currently moving through the state’s legislature. If enacted into law, it would undermine innovation in generative AI (GenAI) not only in California but also nationally, as it would impose onerous requirements on both in-state and out-of-state developers that make GenAI models available in California. 

The extraordinary capabilities of GenAI are made possible by the use of extremely large sets of training data that often include copyrighted content. AB-412 arose from the very reasonable concerns that rights owners have in understanding when and how their content is being used for building GenAI models. But the bill imposes a set of unduly burdensome and unworkable obligations on GenAI developers. It also favors large rights owners, which will be better equipped than small rights owners to pursue the litigation contemplated by the bill."


The Court Battles That Will Decide if Silicon Valley Can Plunder Your Work; Slate, June 30, 2025

Slate; The Court Battles That Will Decide if Silicon Valley Can Plunder Your Work

"Last week, two different federal judges in the Northern District of California made legal rulings that attempt to resolve one of the knottiest debates in the artificial intelligence world: whether it’s a copyright violation for Big Tech firms to use published books for training generative bots like ChatGPT. Unfortunately for the many authors who’ve brought lawsuits with this argument, neither decision favors their case—at least, not for now. And that means creators in all fields may not be able to stop A.I. companies from using their work however they please...

What if these copyright battles are also lost? Then there will be little in the way of stopping A.I. startups from utilizing all creative works for their own purposes, with no consideration as to the artists and writers who actually put in the work. And we will have a world blessed less with human creativity than one overrun by second-rate slop that crushes the careers of the people whose imaginations made that A.I. so potent to begin with."

AI companies start winning the copyright fight; The Guardian, July 1, 2025

The Guardian; AI companies start winning the copyright fight

"The lawsuits over AI-generated text were filed first, and, as their rulings emerge, the next question in the copyright fight is whether decisions about one type of media will apply to the next.

“The specific media involved in the lawsuit – written works versus images versus videos versus audio – will certainly change the fair-use analysis in each case,” said John Strand, a trademark and copyright attorney with the law firm Wolf Greenfield. “The impact on the market for the copyrighted works is becoming a key factor in the fair-use analysis, and the market for books is different than that for movies.”

To Strand, the cases over images seem more favorable to copyright holders, as the AI models are allegedly producing images identical to the copyrighted ones in the training data.

A bizarre and damning fact was revealed in the Anthropic ruling, too: the company had pirated and stored some 7m books to create a training database for its AI. To remediate its wrongdoing, the company bought physical copies and scanned them, digitizing the text. Now the owner of 7m physical books that no longer held any utility for it, Anthropic destroyed them. The company bought the books, diced them up, scanned the text and threw them away, Ars Technica reports. There are less destructive ways to digitize books, but they are slower. The AI industry is here to move fast and break things.

Anthropic laying waste to millions of books presents a crude literalization of the ravenous consumption of content necessary for AI companies to create their products."

US Supreme Court to review billion-dollar Cox Communications copyright case; Reuters, June 30, 2025

Reuters; US Supreme Court to review billion-dollar Cox Communications copyright case

 "The U.S. Supreme Court agreed on Monday to decide a copyright dispute between Cox Communications and a group of music labels following a judicial decision that threw out a $1 billion jury verdict against the internet service provider over alleged piracy of music by Cox customers.

The justices took up Cox's appeal of the lower court's decision that it was still liable for copyright infringement by users of its internet service despite the decision to overturn the verdict...

Cox spokesperson Todd Smith said the company was pleased that the Supreme Court "decided to address these significant copyright issues that could jeopardize internet access for all Americans and fundamentally change how internet service providers manage their networks."...

The labels appealed the 4th Circuit's decision that Cox did not have vicarious liability, a legal doctrine in which a party is found to have indirect liability for the actions of another party, in this case. The labels told the Supreme Court that the circuit court's decision was out of line with other decisions by federal appeals courts on vicarious liability."

Hollywood Confronts AI Copyright Chaos in Washington, Courts; The Wall Street Journal, July 1, 2025

Amrith Ramkumar, Jessica Toonkel, The Wall Street Journal; Hollywood Confronts AI Copyright Chaos in Washington, Courts

Technology firms say using copyrighted materials to train AI models is key to America’s success; creatives want their work protected

Monday, June 30, 2025

Carla Hayden, former Librarian of Congress, speaks on her dismissal, the future of libraries at Philadelphia event; WHYY, June 29, 2025

Emily Neil, WHYY; Carla Hayden, former Librarian of Congress, speaks on her dismissal, the future of libraries at Philadelphia event

"Former Librarian of Congress Carla Hayden spoke at the Free Library of Philadelphia Parkway Central Branch on Saturday night, where she sat down for a fireside chat with Ashley Jordan, president and CEO of the African American Museum in Philadelphia...

In his introductory remarks, Kelly Richards, president and director of the Free Library of Philadelphia, said that Hayden has always been a “tireless advocate” for the library systems throughout her career. He said libraries are not just “repositories of knowledge” in a democratic society, but “vibrant centers of community life, education and inclusion.”

“Libraries have a reputation for being a quiet place, but not tonight,” Richards said, as audience members gave Hayden and Jordan a standing ovation when they entered the stage."

The US Copyright Office is wrong about artificial intelligence; The Hill, June 30, 2025

THINH H. NGUYEN AND DEREK E. BAMBAUER, The Hill; The US Copyright Office is wrong about artificial intelligence

"AI is too important to allow copyright to impede its progress, especially as America seeks to maintain its global competitiveness in tech innovation."

Sunday, June 29, 2025

An AI firm won a lawsuit for copyright infringement — but may face a huge bill for piracy; Los Angeles Times, June 27, 2025

Michael Hiltzik, Los Angeles Times; An AI firm won a lawsuit for copyright infringement — but may face a huge bill for piracy


[Kip Currier: Excellent informative overview of some of the principal issues, players, stakes, and recent decisions in the ongoing AI copyright legal battles. Definitely worth 5-10 minutes of your time to read and reflect on.

A key take-away, derived from Judge Vince Chhabria's decision in last week's Meta win, is that:

Artists and authors can win their copyright infringement cases if they produce evidence showing the bots are affecting their market. Chhabria all but pleaded for the plaintiffs to bring some such evidence before him: 

“It’s hard to imagine that it can be fair use to use copyrighted books...to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.” 

But “the plaintiffs never so much as mentioned it,” he lamented.

https://www.latimes.com/business/story/2025-06-27/an-ai-firm-won-a-lawsuit-over-copyright-infringement-but-may-face-a-huge-bill-for-piracy]


[Excerpt]

"Anthropic had to acknowledge a troubling qualification in Alsup’s order, however. Although he found for the company on the copyright issue, he also noted that it had downloaded copies of more than 7 million books from online “shadow libraries,” which included countless copyrighted works, without permission. 

That action was “inherently, irredeemably infringing,” Alsup concluded. “We will have a trial on the pirated copies...and the resulting damages,” he advised Anthropic ominously: Piracy on that scale could expose the company to judgments worth untold millions of dollars...

“Neither case is going to be the last word” in the battle between copyright holders and AI developers, says Aaron Moss, a Los Angeles attorney specializing in copyright law. With more than 40 lawsuits on court dockets around the country, he told me, “it’s too early to declare that either side is going to win the ultimate battle.”...

With billions of dollars, even trillions, at stake for AI developers and the artistic community, no one expects the law to be resolved until the issue reaches the Supreme Court, presumably years from now...

But Anthropic also downloaded copies of more than 7 million books from online “shadow libraries,” which include untold copyrighted works without permission. 

Alsup wrote that Anthropic “could have purchased books, but it preferred to steal them to avoid ‘legal/practice/business slog.’” (He was quoting Anthropic co-founder and CEO Dario Amodei.)...

Artists and authors can win their copyright infringement cases if they produce evidence showing the bots are affecting their market."...

The truth is that the AI camp is just trying to get out of paying for something, instead getting it for free. Never mind the trillions of dollars in revenue they say they expect over the next decade — they claim that licensing will be so expensive it will stop the march of this supposedly historic technology dead in its tracks.

Chhabria aptly called this argument “nonsense.” If using books for training is as valuable as the AI firms say they are, he noted, then surely a market for book licensing will emerge. That is, it will — if the courts don’t give the firms the right to use stolen works without compensation."

ACM FAccT: ACM Conference on Fairness, Accountability, and Transparency; June 23-26, 2025, Athens, Greece

 

ACM FAccT

ACM Conference on Fairness, Accountability, and Transparency

A computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

"Algorithmic systems are being adopted in a growing number of contexts, fueled by big data. These systems filter, sort, score, recommend, personalize, and otherwise shape human experience, increasingly making or informing decisions with major impact on access to, e.g., credit, insurance, healthcare, parole, social security, and immigration. Although these systems may bring myriad benefits, they also contain inherent risks, such as codifying and entrenching biases; reducing accountability, and hindering due process; they also increase the information asymmetry between individuals whose data feed into these systems and big players capable of inferring potentially relevant information.

ACM FAccT is an interdisciplinary conference dedicated to bringing together a diverse community of scholars from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. Research challenges are not limited to technological solutions regarding potential bias, but include the question of whether decisions should be outsourced to data- and code-driven computing systems. We particularly seek to evaluate technical solutions with respect to existing problems, reflecting upon their benefits and risks; to address pivotal questions about economic incentive structures, perverse implications, distribution of power, and redistribution of welfare; and to ground research on fairness, accountability, and transparency in existing legal requirements." 

Saturday, June 28, 2025

Global South voices ‘marginalised in AI Ethics’; Gates Cambridge, June 27, 2025

Gates Cambridge; Global South voices ‘marginalised in AI Ethics’

"A Gates Cambridge Scholar is first author of a paper how AI Ethics is sidelining Global South voices, reinforcing marginalisation.

The study, Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production, was published by the Association for Computing Machinery and is based on a study of nearly 6,000 AI Ethics publications between 1960 and 2024. Its first author is Abdullah Hasan Safir [2024 – pictured above], who is doing a PhD in Interdisciplinary Design. Other co-authors include Gates Cambridge Scholars Ramit Debnath [2018] and Kerry McInerney [2017].

The findings were recently presented at the ACM’s FAccT conference, considered one of the top AI Ethics conferences in the world. They show that experts from the Global North currently legitimise their expertise in AI Ethics through dynamic citational and collaborative practices in knowledge production within the field, including co-citation and institutional of AI Ethics."

The Anthropic Copyright Ruling Exposes Blind Spots on AI; Bloomberg, June 26, 2025

Bloomberg; The Anthropic Copyright Ruling Exposes Blind Spots on AI


[Kip Currier: It's still early days in the AI copyright legal battles underway between AI tech companies and everyone else whose works were "scarfed up" as training data to enable the former to create lucrative AI tools and products. But cases like this week's Anthropic lawsuit win and another suit won by Meta (with some issues still to be adjudicated regarding the use of pirated materials as AI training data) are finally giving us some more discernible "tea leaves" and "black letter law" as to how courts are likely to rule vis-a-vis AI inputs.

This week being the much ballyhooed 50th anniversary of the so-called "1st summer blockbuster flick" Jaws ("you're gonna need a bigger boat"), these rulings make me think we the public may need a bigger copyright law schema that sets out protections for the creatives making the fuel that enables stratospherically profitable AI innovations. The Jaws metaphor may be a bit on-the-nose, but one can't help but view AI tech companies akin to rapacious sharks that are imperiling the financial survival and long-standing business models of human creators.

As touched on in this Bloomberg article, too, there's a moral argument here: even if a court says the uncompensated, unpermissioned use of creative works by AI tech folks is legal, that doesn't make it ethically justifiable. Nor does it mean that these companies shouldn't be required, by updated federal copyright legislation and licensing frameworks, to fairly compensate creators for the use of their copyrighted works. After all, billionaire tech oligarchs like Zuckerberg, Musk, and Altman would never allow others to do to them what they've done to creatives with impunity and zero contrition.

Are you listening, Congress?

Or are all of you in the pockets of AI tech company lobbyists, rather than representing the needs and interests of all of your constituents, not just the billionaire class?]


[Excerpt]

"In what is shaping up to be a long, hard fight over the use of creative works, round one has gone to the AI makers. In the first such US decision of its kind, District Judge William Alsup said Anthropic’s use of millions of books to train its artificial-intelligence model, without payment to the sources, was legal under copyright law because it was “transformative — spectacularly so.”...

If a precedent has been set, as several observers believe, it stands to cripple one of the few possible AI monetization strategies for rights holders, which is to sell licenses to firms for access to their work. Some of these deals have already been made while the “fair use” question has been in limbo, deals that emerged only after the threat of legal action. This ruling may have just taken future deals off the table...

Alsup was right when he wrote that “the technology at issue was among the most transformative many of us will see in our lifetimes.”...

But that doesn’t mean it shouldn’t pay its way. Nobody would dare suggest Nvidia Corp. CEO Jensen Huang hand out his chips free. No construction worker is asked to keep costs down by building data center walls for nothing. Software engineers aren’t volunteering their time to Meta Platforms Inc. in awe of Mark Zuckerberg’s business plan — they instead command salaries of $100 million and beyond. 

Yet, as ever, those in the tech industry have decided that creative works, and those who create them, should be considered of little or no value and must step aside in service of the great calling of AI — despite being every bit as vital to the product as any other factor mentioned above. As science-fiction author Harlan Ellison said in his famous sweary rant, nobody ever wants to pay the writer if they can get away with it. When it comes to AI, paying creators of original work isn’t impossible, it’s just inconvenient. Legislators should leave companies no choice."

Friday, June 27, 2025

No One Is in Charge at the US Copyright Office; Wired, June 27, 2025

"It’s a tumultuous time for copyright in the United States, with dozens of potentially economy-shaking AI copyright lawsuits winding through the courts. It’s also the most turbulent moment in the US Copyright Office’s history. Described as “sleepy” in the past, the Copyright Office has taken on new prominence during the AI boom, issuing key rulings about AI and copyright. It also hasn’t had a leader in more than a month...

As the legality of the ouster is debated, the reality within the office is this: There’s effectively nobody in charge. And without a leader actually showing up at work, the Copyright Office is not totally business-as-usual; in fact, there’s debate over whether the copyright certificates it’s issuing could be challenged."

Getty drops copyright allegations in UK lawsuit against Stability AI; AP, June 25, 2025

 KELVIN CHAN, AP; Getty drops copyright allegations in UK lawsuit against Stability AI

"Getty Images dropped copyright infringement allegations from its lawsuit against artificial intelligence company Stability AI as closing arguments began Wednesday in the landmark case at Britain’s High Court. 

Seattle-based Getty’s decision to abandon the copyright claim removes a key part of its lawsuit against Stability AI, which owns a popular AI image-making tool called Stable Diffusion. The two have been facing off in a widely watched court case that could have implications for the creative and technology industries."

Denmark to tackle deepfakes by giving people copyright to their own features; The Guardian, June 27, 2025

The Guardian; Denmark to tackle deepfakes by giving people copyright to their own features

"The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.

The Danish government said on Thursday it would strengthen protection against digital imitations of people’s identities with what it believes to be the first law of its kind in Europe."

Wednesday, June 25, 2025

Judge dismisses authors’ copyright lawsuit against Meta over AI training; AP, June 25, 2025

 MATT O’BRIEN AND BARBARA ORTUTAY, AP; Judge dismisses authors’ copyright lawsuit against Meta over AI training

"Although Meta prevailed in its request to dismiss the case, it could turn out to be a pyrrhic victory. In his 40-page ruling, Chhabria repeatedly indicated reasons to believe that Meta and other AI companies have turned into serial copyright infringers as they train their technology on books and other works created by humans, and seemed to be inviting other authors to bring cases to his court presented in a manner that would allow them to proceed to trial.

The judge scoffed at arguments that requiring AI companies to adhere to decades-old copyright laws would slow down advances in a crucial technology at a pivotal time. “These products are expected to generate billions, even trillions of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.”"

Ball State University Libraries Launches Research Guide on Ethical AI Use; Ball State University, June 24, 2025

 Ball State University; Ball State University Libraries Launches Research Guide on Ethical AI Use

"In an era in which artificial intelligence tools are rapidly reshaping how we access and share information, Ball State University Libraries has introduced a new research guide to help students, faculty, staff, and community members use AI more thoughtfully and effectively.

The interactive guide, now available at bsu.libguides.com, equips users with foundational skills to assess the credibility, accuracy, and ethical implications of generative AI tools like ChatGPT and image generators. Through five short videos and practical examples, the guide teaches users to identify potential misinformation, recognize AI-generated bias, and apply AI output in meaningful and responsible ways.

Key learning outcomes include:"

Tuesday, June 24, 2025

Anthropic’s AI copyright ‘win’ is more complicated than it looks; Fast Company, June 24, 2025

CHRIS STOKEL-WALKER, Fast Company; Anthropic’s AI copyright ‘win’ is more complicated than it looks

"And that’s the catch: This wasn’t an unvarnished win for Anthropic. Like other tech companies, Anthropic allegedly sourced training materials from piracy sites for ease—a fact that clearly troubled the court. “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote, referring to Anthropic’s alleged pirating of more than 7 million books.

That alone could carry billions in liability, with statutory damages starting at $750 per book—a trial on that issue is still to come.

So while tech companies may still claim victory (with some justification, given the fair use precedent), the same ruling also implies that companies will need to pay substantial sums to legally obtain training materials. OpenAI, for its part, has in the past argued that licensing all the copyrighted material needed to train its models would be practically impossible.

Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction: The U.S.,” she says. “I think they don’t entirely have purchase over this thing about whether or not it was transformative in the sense of changing Claude’s output.”"

The copyright war between the AI industry and creatives; Financial Times, June 23, 2025

Financial Times; The copyright war between the AI industry and creatives

"One is that the government itself estimates that “creative industries generated £126bn in gross value added to the economy [5 per cent of GDP] and employed 2.4 million people in 2022”. It is at the very least an open question whether the value added of the AI industry will ever be of a comparable scale in this country. Another is that the creative industries represent much of the best of what the UK and indeed humanity does. The idea of handing over its output for free is abhorrent...

Interestingly, for much of the 19th century, the US did not recognise international copyright at all in its domestic law. Anthony Trollope himself complained fiercely about the theft of the copyright over his books."

Anthropic wins key US ruling on AI training in authors' copyright lawsuit; Reuters, June 24, 2025

Reuters; Anthropic wins key US ruling on AI training in authors' copyright lawsuit

 "A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.

Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made "fair use" of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.

Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a "central library" infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement."

Study: Meta AI model can reproduce almost half of Harry Potter book; Ars Technica, June 20, 2025

TIMOTHY B. LEE, Ars Technica; Study: Meta AI model can reproduce almost half of Harry Potter book

"In recent years, numerous plaintiffs—including publishers of books, newspapers, computer code, and photographs—have sued AI companies for training models using copyrighted material. A key question in all of these lawsuits has been how easily AI models produce verbatim excerpts from the plaintiffs’ copyrighted content.

For example, in its December 2023 lawsuit against OpenAI, The New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a “fringe behavior” and a “problem that researchers at OpenAI and elsewhere work hard to address.”

But is it actually a fringe behavior? And have leading AI companies addressed it? New research—focusing on books rather than newspaper articles and on different companies—provides surprising insights into this question. Some of the findings should bolster plaintiffs’ arguments, while others may be more helpful to defendants.

The paper was published last month by a team of computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. They studied whether five popular open-weight models—three from Meta and one each from Microsoft and EleutherAI—were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright."
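
For readers curious about what "reproduce text" means in practice, here is a minimal, hypothetical sketch (not the paper's actual methodology) of the kind of measurement involved: given the opening tokens of a passage as a prompt, how probable is it that an open-weight model continues with the next tokens verbatim? The model name, passage, and excerpt lengths below are stand-ins chosen only for illustration.

```python
# Minimal sketch: score how likely a small open-weight model is to reproduce
# a short excerpt verbatim, conditioned on the text that precedes it.
# Assumptions: Hugging Face transformers and torch are installed; the model,
# passage, and lengths are illustrative stand-ins, not those used in the study.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"  # small stand-in model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Public-domain stand-in for a passage that might appear in a training corpus.
text = (
    "It is a truth universally acknowledged, that a single man in possession "
    "of a good fortune, must be in want of a wife. However little known the "
    "feelings or views of such a man may be on his first entering a "
    "neighbourhood, this truth is so well fixed in the minds of the "
    "surrounding families, that he is considered as the rightful property of "
    "some one or other of their daughters."
)

ids = tok(text, return_tensors="pt").input_ids[0]
prefix_len, target_len = 30, 30  # arbitrary illustration lengths
assert ids.shape[0] >= prefix_len + target_len, "passage too short for this demo"
target = ids[prefix_len : prefix_len + target_len]

with torch.no_grad():
    logits = model(ids[: prefix_len + target_len].unsqueeze(0)).logits[0]

# logits[i] predicts token i+1, so the target tokens are scored by rows
# prefix_len-1 .. prefix_len+target_len-2 of the log-probability matrix.
log_probs = torch.log_softmax(logits, dim=-1)
rows = log_probs[prefix_len - 1 : prefix_len + target_len - 1]
excerpt_logprob = rows.gather(1, target.unsqueeze(1)).sum().item()
print(f"log P(excerpt | prefix) = {excerpt_logprob:.2f}")
```

A score near zero means the model assigns the verbatim continuation overwhelming probability; a very negative score means near-verbatim reproduction is unlikely. The paper's actual analysis is far more extensive, but this is the basic quantity at issue when the parties argue over how "easily" a model regurgitates copyrighted text.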

Copyright Cases Should Not Threaten Chatbot Users’ Privacy; Electronic Frontier Foundation (EFF), June 23, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); Copyright Cases Should Not Threaten Chatbot Users’ Privacy

"Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy."