Wednesday, July 23, 2025

Trump derides copyright and state rules in AI Action Plan launch; Politico, July 23, 2025

Mohar Chatterjee, Politico; Trump derides copyright and state rules in AI Action Plan launch

"President Donald Trump criticized copyright enforcement efforts and state-level AI regulations Wednesday as he launched the White House’s AI Action Plan on a mission to dominate the industry.

In remarks delivered at a “Winning the AI Race” summit hosted by the All-In Podcast and the Hill and Valley Forum in Washington, Trump said stringent copyright enforcement was unrealistic for the AI industry and would kneecap U.S. companies trying to compete globally, particularly against China.

“You can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for,” he said. “You just can’t do it because it’s not doable. ... China’s not doing it.”

Trump’s comments were a riff, as his 28-page AI Action Plan did not wade into copyright, and administration officials told reporters the issue should be left to the courts to decide.

Trump also signed three executive orders. One will fast track federal permitting, streamline reviews and “do everything possible to expedite construction of all major AI infrastructure projects,” Trump said. Another expands American exports of AI hardware and software. A third order bans the federal government from procuring AI technology “that has been infused with partisan bias or ideological agendas,” as Trump put it...

Trump echoed tech companies’ complaints about state AI laws creating a patchwork of regulation. “You can’t have one state holding you up,” he said. “We need one common sense federal standard that supersedes all states, supersedes everybody.”"

Trump has fired the head of the Library of Congress, but the 225-year-old institution remains a ‘library for all’ – so far; The Conversation, July 23, 2025

Associate Professor of Information Science, Drexel University, The Conversation; Trump has fired the head of the Library of Congress, but the 225-year-old institution remains a ‘library for all’ – so far

"A library for all

Following Hayden’s dismissal, Trump appointed Deputy Attorney General Todd Blanche, his former personal lawyer, as acting librarian of Congress. 

Hayden has contended that her dismissal, which occurred alongside other firings of top civil servants, including the national archivist, represents a broad threat to people’s right to easily access free information. 

“Democracies are not to be taken for granted,” Hayden said in June. She explained in an interview with CBS that she never had a problem with a presidential administration and is not sure why she was dismissed. 

“And the institutions that support democracy should not be taken for granted,” Hayden added. 

In her final annual report as librarian, Hayden characterized the institution as “truly, a library for all.” So far, even without her leadership, it remains just that."

AI chatbots remain overconfident -- even when they’re wrong; EurekAlert!, July 22, 2025

Carnegie Mellon University, EurekAlert!; AI chatbots remain overconfident -- even when they’re wrong

"Artificial intelligence chatbots are everywhere these days, from smartphone apps and customer service portals to online search engines. But what happens when these handy tools overestimate their own abilities? 

Researchers asked both human participants and four large language models (LLMs) how confident they felt in their ability to answer trivia questions, predict the outcomes of NFL games or Academy Award ceremonies, or play a Pictionary-like image identification game. Both the people and the LLMs tended to be overconfident about how they would hypothetically perform. Interestingly, they also answered questions or identified images with relatively similar success rates.

However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations, according to a study published today in the journal Memory & Cognition.

“Say the people told us they were going to get 18 questions right, and they ended up getting 15 questions right. Typically, their estimate afterwards would be something like 16 correct answers,” said Trent Cash, who recently completed a joint Ph.D. at Carnegie Mellon University in the departments of Social Decision Science and Psychology. “So, they’d still be a little bit overconfident, but not as overconfident.”

“The LLMs did not do that,” said Cash, who was lead author of the study. “They tended, if anything, to get more overconfident, even when they didn’t do so well on the task.”

The world of AI is changing rapidly each day, which makes drawing general conclusions about its applications challenging, Cash acknowledged. However, one strength of the study was that the data was collected over the course of two years, which meant using continuously updated versions of the LLMs known as ChatGPT, Bard/Gemini, Sonnet and Haiku. This means that AI overconfidence was detectable across different models over time.

“When an AI says something that seems a bit fishy, users may not be as skeptical as they should be because the AI asserts the answer with confidence, even when that confidence is unwarranted,” said Danny Oppenheimer, a professor in CMU’s Department of Social and Decision Sciences and coauthor of the study."

Wave of copyright lawsuits hit AI companies like Cambridge-based Suno; WBUR, July 23, 2025

WBUR; Wave of copyright lawsuits hit AI companies like Cambridge-based Suno

"Suno, a Cambridge company that generates AI music, faces multiple lawsuits alleging it illegally trained its model on copyrighted work. Peter Karol of Suffolk Law School and Bhamati Viswanathan of Columbia University Law School's Kernochan Center for Law, Media, and the Arts join WBUR's Morning Edition to explain how the suits against Suno fit into a broader legal battle over the future of creative work.

This segment aired on July 23, 2025. Audio will be available soon."

Tuesday, July 22, 2025

Commentary: A win-win-win path for AI in America; The Post & Courier, July 22, 2025

Keith Kupferschmid, The Post & Courier; Commentary: A win-win-win path for AI in America

"Contrary to claims that these AI training deals are impossible to make at scale, a robust free market is already emerging in which hundreds (if not thousands) of licensed deals between AI companies and copyright owners have been reached. New research shows it is possible to create fully licensed data sets for AI.

No wonder one federal judge recently called claims that licensing is impractical “ridiculous,” given the billions at stake: “If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders.” Just like AI companies don’t dispute that they have to pay for energy, infrastructure, coding teams and the other inputs their operations require, they need to pay for creative works as well.

America’s example to the world is a free-market economy based on the rule of law, property rights and freedom to contract — so, let the market innovate solutions to these new (but not so new) licensing challenges. Let’s construct a pro-innovation, pro-worker approach that replaces the false choice of the AI alarmists with a positive, pro-America pathway to leadership on AI."

Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models; Deadline, July 21, 2025

Ted Johnson, Deadline; Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models

"Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT) introduced legislation on Monday that would restrict AI companies from using copyrighted material in their training models without the consent of the individual owner.

The AI Accountability and Personal Data Protection Act also would allow individuals to sue companies that use their personal data or copyrighted works without their “express, prior consent.”

The bill addresses a raging debate between tech and content owners, one that has already led to extensive litigation. Companies like OpenAI have argued that the use of copyrighted materials in training models is a fair use, while figures including John Grisham and George R.R. Martin have challenged that notion."

Sunday, July 20, 2025

Judge dismisses Trump’s copyright lawsuit against Bob Woodward and audiobook publisher; CNN, July 18, 2025

CNN; Judge dismisses Trump’s copyright lawsuit against Bob Woodward and audiobook publisher

"President Trump’s lawsuit against journalist Bob Woodward and his publishing house Simon & Schuster was dismissed by a federal judge on Friday, the same day that Trump filed a new and unrelated suit against the Wall Street Journal.

US District Judge Paul Gardephe of the Southern District of New York gave Trump’s legal team a month to file another amended complaint. But for now, he dismissed the current iteration and said “it appears unlikely” that Trump “can adequately plead a plausible copyright interest in ‘The Trump Tapes.’”"

AI guzzled millions of books without permission. Authors are fighting back.; The Washington Post, July 19, 2025

The Washington Post; AI guzzled millions of books without permission. Authors are fighting back.


[Kip Currier: I've written this before on this blog and I'll say it again: technology companies would never allow anyone to freely vacuum up their content and use it without permission or compensation. Period. Full Stop.]


[Excerpt]

"Baldacci is among a group of authors suing OpenAI and Microsoft over the companies’ use of their work to train the AI software behind tools such as ChatGPT and Copilot without permission or payment — one of more than 40 lawsuits against AI companies advancing through the nation’s courts. He and other authors this week appealed to Congress for help standing up to what they see as an assault by Big Tech on their profession and the soul of literature.

They found sympathetic ears at a Senate subcommittee hearing Wednesday, where lawmakers expressed outrage at the technology industry’s practices. Their cause gained further momentum Thursday when a federal judge granted class-action status to another group of authors who allege that the AI firm Anthropic pirated their books.

“I see it as one of the moral issues of our time with respect to technology,” Ralph Eubanks, an author and University of Mississippi professor who is president of the Authors Guild, said in a phone interview. “Sometimes it keeps me up at night.”

Lawsuits have revealed that some AI companies had used legally dubious “torrent” sites to download millions of digitized books without having to pay for them."

Judge Rules Class Action Suit Against Anthropic Can Proceed; Publishers Weekly, July 18, 2025

Jim Milliot, Publishers Weekly; Judge Rules Class Action Suit Against Anthropic Can Proceed

"In a major victory for authors, U.S. District Judge William Alsup ruled July 17 that three writers suing Anthropic for copyright infringement can represent all other authors whose books the AI company allegedly pirated to train its AI model as part of a class action lawsuit.

In late June, Alsup of the Northern District of California ruled in Bartz v. Anthropic that the AI company's training of its Claude LLMs on authors' works was "exceedingly transformative," and therefore protected by fair use. However, Alsup also determined that the company's practice of downloading pirated books from sites including Books3, Library Genesis, and Pirate Library Mirror (PiLiMi) to build a permanent digital library was not covered by fair use.

Alsup’s most recent ruling follows an amended complaint from the authors looking to certify classes of copyright owners in a “Pirated Books Class” and in a “Scanned Books Class.” In his decision, Alsup certified only a LibGen and PiLiMi Pirated Books Class, writing that “this class is limited to actual or beneficial owners of timely registered copyrights in ISBN/ASIN-bearing books downloaded by Anthropic from these two pirate libraries.”

Alsup stressed that “the class is not limited to authors or author-like entities,” explaining that “a key point is to cover everyone who owns the specific copyright interest in play, the right to make copies, either as the actual or as the beneficial owner.” Later in his decision, Alsup makes it clear who is covered by the ruling: “A beneficial owner...is someone like an author who receives royalties from any publisher’s revenues or recoveries from the right to make copies. Yes, the legal owner might be the publisher but the author has a definite stake in the royalties, so the author has standing to sue. And, each stands to benefit from the copyright enforcement at the core of our case however they then divide the benefit.”"

US authors suing Anthropic can band together in copyright class action, judge rules; Reuters, July 17, 2025

Reuters; US authors suing Anthropic can band together in copyright class action, judge rules

"A California federal judge ruled on Thursday that three authors suing artificial intelligence startup Anthropic for copyright infringement can represent writers nationwide whose books Anthropic allegedly pirated to train its AI system.

U.S. District Judge William Alsup said the authors can bring a class action on behalf of all U.S. writers whose works Anthropic allegedly downloaded from "pirate libraries" LibGen and PiLiMi to create a repository of millions of books in 2021 and 2022."

Thursday, July 17, 2025

Libraries Pay More for E-Books. Some States Want to Change That.; The New York Times, July 16, 2025

Erik Ofgang, The New York Times; Libraries Pay More for E-Books. Some States Want to Change That.

Proposed legislation would pressure publishers to adjust borrowing limits and find other ways to widen access. 

"Librarians complain that publishers charge so much to license e-books that it’s busting library budgets and frustrating efforts to provide equitable access to reading materials. Big publishers and many authors say that e-book library access undermines their already struggling business models. Smaller presses are split."

What Book Authors’ AI Copyright Court Losses Mean for the Music Business; Billboard, July 14, 2025

Rachel Scharf, Billboard; What Book Authors’ AI Copyright Court Losses Mean for the Music Business

While the first copyright rulings have come out on the side of AI platforms, this is hardly a death knell for the music giants' lawsuits against Suno, Udio and Anthropic, legal experts say. 

The Art (and Legality) of Imitation: Navigating the Murky Waters of Fair Use in AI Training; The National Law Review, July 16, 2025

Sarah C. Reasoner, Ashley N. Higginson, Anita C. Marinelli, Kimberly A. Berger of Miller Canfield, The National Law Review; The Art (and Legality) of Imitation: Navigating the Murky Waters of Fair Use in AI Training 

"The legal landscape for artificial intelligence is still developing, and no outcome can yet be predicted with any sort of accuracy. While some courts appear poised to accept AI model training as transformative, other courts do not. As AI technology continues to advance, the legal system must adapt to address the unique challenges it presents. Meanwhile, businesses and creators navigating this uncertain terrain should stay informed about legal developments and consider proactive measures to mitigate risks. As we await further rulings and potential legislative action, one thing is clear: the conversation around AI and existing intellectual property protection is just beginning."

Wednesday, July 16, 2025

Musicians brace for impact as Senate vote on public radio looms; The Washington Post, July 15, 2025

The Washington Post; Musicians brace for impact as Senate vote on public radio looms

"For the more than 1,000 public radio stations that play independent music, Boilen says the bill is an existential threat...

“All stations would be in trouble of not being able to play music,” NPR president and CEO Katherine Maher said. The CPB spends nearly $20 million on licensing most years, covering an expense Maher said would be impossible for most stations to afford. “Regardless of how big you are, even the largest station in the NPR network and in public radio still operates on a budget of less than $100 million a year.”

Licensing isn’t the only thing threatened by the rescission bill, which also retracts funding from foreign aid programs such as global AIDS prevention and other public media such as PBS."

Can Gen AI and Copyright Coexist?; Harvard Business Review, July 16, 2025

Harvard Business Review; Can Gen AI and Copyright Coexist?

"We’re experts in the study of digital transformation and have given this issue a lot of thought. We recently served, for example, on a roundtable of 10 economists convened by the U.S. Copyright Office to study the implications of gen AI on copyright policy. We recognize that the two decisions are far from the last word on this topic; both will no doubt be appealed to the Ninth Circuit and then subsequently to the Supreme Court. But in the meantime, we believe there are already many lessons to be learned from these decisions about the implications of gen AI for business—lessons that will be useful for leaders in both the creative industries and gen AI companies."

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."

Tuesday, July 15, 2025

Research Guides in Focus – Intellectual Property Law: A Beginner’s Guide; Library of Congress, July 15, 2025

 Sarah Friedman, Library of Congress; Research Guides in Focus – Intellectual Property Law: A Beginner’s Guide

"The Law Library of Congress is pleased to announce the publication of the new research guide, Intellectual Property Law: A Beginner’s Guide. This guide provides an overview of resources for researching patent, copyright, and trademark law.

The guide begins with a general explanation of intellectual property, followed by print and online resources for further learning about the subject. There are also tabs for resources specific to patent, copyright, and trademark law. For each area of intellectual property law, we have gathered secondary sources, statutes, regulations, treaties, databases for searching records, case law sources, lists of organizations that can assist with applications for protection, and other online resources.

We hope that this guide will be a valuable resource for researchers seeking to learn more about intellectual property laws, researchers searching for existing patent, copyright, and trademark records, and researchers who want to learn about the processes to apply for protection for their intellectual property. As always, we encourage researchers who have further questions, comments, or feedback about this guide to reach out to us through Ask a Librarian."

Monday, July 14, 2025

Popular Rock Band Demands Trump's DHS Take Down ICE Video Over Copyright Violation: 'And Go F–k Yourselves': "It's obvious that you don't respect Copyright Law"; Latin Times, July 14, 2025

Latin Times; Popular Rock Band Demands Trump's DHS Take Down ICE Video Over Copyright Violation: 'And Go F–k Yourselves'


"The rock band Black Rebel Motorcycle Club (BRMC) is demanding that the US Department of Homeland Security (DHS) remove a recent video that used their recording of "God's Gonna Cut You Down" without permission.

The band made their disapproval of the DHS very clear, accusing the agency of violating not only copyright law, but fundamental constitutional values.

"It's obvious that you don't respect Copyright Law and Artist Rights any more than you respect Habeas Corpus and Due Process rights," the band wrote. "Not to mention the separation of Church and State per the US Constitution."

"For the record, we hereby order @dhsgov to cease and desist the use of our recording and demand that you immediately pull down your video," the statement continued.

"Oh, and go f–k yourselves," they concluded."

Friday, July 11, 2025

AI must have ethical management, regulation protecting human person, Pope Leo says; The Catholic Register, July 11, 2025

Carol Glatz, The Catholic Register; AI must have ethical management, regulation protecting human person, Pope Leo says

"Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence "requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency," Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope's behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10."

Join Our Livestream: Inside the AI Copyright Battles; Wired, July 11, 2025

Reece Rogers, Wired; Join Our Livestream: Inside the AI Copyright Battles

"WHAT'S GOING ON right now with the copyright battles over artificial intelligence? Many lawsuits regarding generative AI’s training materials were initially filed back in 2023, with decisions just now starting to trickle out. Whether it’s Midjourney generating videos of Disney characters, like Wall-E brandishing a gun, or an exit interview with a top AI lawyer as he left Meta, WIRED senior writer Kate Knibbs has been following this fight for years—and she’s ready to answer your questions.

Bring all your burning questions about the AI copyright battles to WIRED’s next, subscriber-only livestream scheduled for July 16 at 12pm ET / 9am PT, hosted by Reece Rogers with Kate Knibbs. The event will be streamed right here. For subscribers who are not able to join, a replay of the livestream will be available after the event."

Thursday, July 10, 2025

EU's AI code of practice for companies to focus on copyright, safety; Reuters, July 10, 2025

Reuters; EU's AI code of practice for companies to focus on copyright, safety

"The European Commission on Thursday unveiled a draft code of practice aimed at helping firms comply with the European Union's artificial intelligence rules and focused on copyright-protected content safeguards and measures to mitigate systemic risks.

Signing up to the code, which was drawn up by 13 independent experts, is voluntary, but companies that decline to do so will not benefit from the legal certainty provided to a signatory.

The code is part of the AI rule book, which will come into effect in a staggered manner and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and other companies."

EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays; JD Supra, July 8, 2025

Mark Booth, Steven Farmer, Scott Morton, JD Supra; EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays

"The EU AI Act (AI Act), effective since February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on various actors, including providers, deployers, importers and manufacturers, and requires that organizations ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems. For a comprehensive overview of the AI Act, see our earlier client alert.

As of mid-2025, the implementation landscape is evolving. This update takes stock of where things stand, focusing on: (i) new guidance on the AI literacy obligations for providers and deployers; (ii) the status of the developing General-Purpose AI Code of Practice and its implications; and (iii) the prospect of delayed enforcement of some of the AI Act’s key provisions."

Microsoft Pledges $4 Billion Toward A.I. Education; The New York Times, July 9, 2025

Natasha Singer, The New York Times; Microsoft Pledges $4 Billion Toward A.I. Education


[Kip Currier: Not one mention of "ethics" or "AI ethics" in this New York Times article.

So, I sent an email to the reporter today (7/10/25):

Dear Natasha Singer,

I was surprised, and actually disconcerted, to not see any mention of "ethics" and "AI ethics" concepts in your article "Microsoft Pledges $4 Billion Toward A.I. Education". Given well-documented concerns of the vital need for ethical guidelines and frameworks vis-a-vis AI by a wide range of stakeholders (e.g. religious leaders/Rome Call for AI Ethics, the U.N. AI Advisory Body, academics, etc.), I would have expected your reporting to at least have mentioned potential ethical considerations about this Microsoft funding plan, which carries such significant implications for education and societies.

Best wishes,

Kip Currier]

 

[Excerpt]

"Microsoft said on Wednesday that it planned to give more than $4 billion in cash and technology services to train millions of people to use artificial intelligence, amid an intensifying Silicon Valley crusade to embed chatbots into classrooms.

Microsoft, the maker of the Copilot chatbot, said the resources would go to schools, community colleges, technical colleges and nonprofits. The company is also starting a new training program, Microsoft Elevate Academy, to “deliver A.I. education and skilling at scale” and help 20 million people earn certificates in A.I.

“Microsoft will serve as an advocate to ensure that students in every school across the country have access to A.I. education,” Brad Smith, the president of Microsoft, said in an interview on Sunday.

Microsoft did not immediately specify how much of the more than $4 billion the company planned to dispense as grants and how much of it would be in the form of Microsoft A.I. services and cloud computing credits.

The announcement comes as tech companies are racing to train millions of teachers and students on their new A.I. tools. Even so, researchers say it is too soon to tell whether the classroom chatbots will end up improving educational outcomes or eroding important skills like critical thinking.

On Tuesday, the American Federation of Teachers, a union representing 1.8 million members, said it was setting up a national A.I. training center for educators, with $23 million in funding from Microsoft and two other chatbot makers, OpenAI and Anthropic."

Wednesday, July 9, 2025

How the Vatican Is Shaping the Ethics of Artificial Intelligence; American Enterprise Institute, July 7, 2025

Shane Tews, American Enterprise Institute; How the Vatican Is Shaping the Ethics of Artificial Intelligence

"Father Paolo Benanti is an Italian Catholic priest, theologian, and member of the Third Order Regular of St. Francis. He teaches at the Pontifical Gregorian University and has served as an advisor to both former Pope Francis and current Pope Leo on matters of artificial intelligence and technology ethics within the Vatican.

Below is a lightly edited and abridged transcript of our discussion...

In the Vatican document, you emphasize that AI is just a tool—an elegant one, but it shouldn’t control our thinking or replace human relationships. You mention it “requires careful ethical consideration for human dignity and common good.” How do we identify that human dignity point, and what mechanisms can alert us when we’re straying from it?

I’ll try to give a concise answer, but don’t forget that this is a complex element with many different applications, so you can’t reduce it to one answer. But the first element—one of the core elements of human dignity—is the ability to self-determine our trajectory in life. I think that’s the core element, for example, in the Declaration of Independence. All humans have rights, but you have the right to the pursuit of happiness. This could be the first description of human rights.

In that direction, we could have a problem with this kind of system because one of the first and most relevant elements of AI, from an engineering perspective, is its prediction capabilities. Every time a streaming platform suggests what you can watch next, it’s changing the number of people using the platform or the online selling system. This idea that interaction between human beings and machines can produce behavior is something that could interfere with our quality of life and pursuit of happiness. This is something that needs to be discussed.

Now, the problem is: don’t we have a cognitive right to know if we have a system acting in that way? Let me give you some numbers. When you’re 65, you’re probably taking three different drugs per day. When you reach 68 to 70, you probably have one chronic disease. Chronic diseases depend on how well you stick to therapy. Think about the debate around insulin and diabetes. If you forget to take your medication, your quality of life deteriorates significantly. Imagine using this system to help people stick to their therapy. Is that bad? No, of course not. Or think about using it in the workplace to enhance workplace safety. Is that bad? No, of course not.

But if you apply it to your life choices—your future, where you want to live, your workplace, and things like that—that becomes much more intense. Once again, the tool could become a weapon, or the weapon could become a tool. This is why we have to ask ourselves: do we need something like a cognitive right regarding this? That you are in a relationship with a machine that has the tendency to influence your behavior.

Then you can accept it: “I have diabetes, I need something that helps me stick to insulin. Let’s go.” It’s the same thing that happens with a smartwatch when you have to close the rings. The machine is pushing you to have healthy behavior, and we accept it. Well, right now we have nothing like that framework. Should we think about something in the public space? It’s not a matter of allowing or preventing some kind of technology. It’s a matter of recognizing what it means to be human in an age of such powerful technology—just to give a small example of what you asked me."

Viewpoint: Don’t let America’s copyright crackdown hand China global AI leadership; Grand Forks Herald, July 5, 2025

Kent Conrad and Saxby Chambliss, Grand Forks Herald; Viewpoint: Don’t let America’s copyright crackdown hand China global AI leadership


[Kip Currier: The assertion by anti-AI regulation proponents, like the former U.S. congressional authors of this think-piece, that requiring AI tech companies to secure permission and pay for AI training data will kill or hobble U.S. AI entrepreneurship is hyperbolic catastrophizing. AI tech companies can license training data from creators who are willing to participate in licensing frameworks. Such frameworks already exist for music copyrights, for example. AI tech companies just don't want to pay for something if they can get it for free.

AI tech companies would never permit users to scrape up, package, and sell their IP content for free. Copyright holders shouldn't be held to a different standard and be required to let tech companies monetize their IP-protected works without permission and compensation.]

[Excerpt]

"If these lawsuits succeed, or if Congress radically rewrites the law, it will become nearly impossible for startups, universities or mid-size firms to develop competitive AI tools."

Why the new rulings on AI copyright might actually be good news for publishers; Fast Company, July 9, 2025

PETE PACHAL, Fast Company; Why the new rulings on AI copyright might actually be good news for publishers

"The outcomes of both cases were more mixed than the headlines suggest, and they are also deeply instructive. Far from closing the door on copyright holders, they point to places where litigants might find a key...

Taken together, the three cases point to a clearer path forward for publishers building copyright cases against Big AI:

Focus on outputs instead of inputs: It’s not enough that someone hoovered up your work. To build a solid case, you need to show that what the AI company did with it reproduced it in some form. So far, no court has definitively decided whether AI outputs are meaningfully different enough to count as “transformative” in the eyes of copyright law, but it should be noted that courts have ruled in the past that copyright violation can occur even when small parts of the work are copied—if those parts represent the “heart” of the original.

Show market harm: This looks increasingly like the main battle. Now that we have a lot of data on how AI search engines and chatbots—which, to be clear, are outputs—are affecting the online behavior of news consumers, the case that an AI service harms the media market is easier to make than it was a year ago. In addition, the emergence of licensing deals between publishers and AI companies is evidence that there’s market harm by creating outputs without offering such a deal.

Question source legitimacy: Was the content legally acquired or pirated? The Anthropic case opens this up as a possible attack vector for publishers. If they can prove scraping occurred through paywalls—without subscribing first—that could be a violation even absent any outputs."

U.S. Copyright Office Announces Webinar on Copyright Essentials for Writers; U.S. Copyright Office, Webinar: August 6, 2025 1 PM EDT

 U.S. Copyright Office; U.S. Copyright Office Announces Webinar on Copyright Essentials for Writers

"The U.S. Copyright Office invites you to register to attend the third session in our Copyright Essentials webinar series. The Plot Thickens: Copyright Essentials for Writers will take place on August 6 at 1:00 p.m. eastern time. 

In this session, the Copyright Office will discuss what writers should know about copyright. We will cover information for writers of various literary works—from novels and blogs to poetry, cookbooks, textbooks, and more. The session will also review suitable application options and how our Public Information Office can help you along the way. 

Attendees will also learn copyright basics, answers to commonly asked questions, and where to find Copyright Office educational resources.

Speakers:

  • Jessica Chinnadurai, Attorney-Advisor, Office of Public Information and Education
  • Laura Kaiser, Attorney-Advisor, Office of Public Information and Education

Prior Copyright Essentials webinars can be viewed on our website:

The Copyright Office strategic goal of Copyright for All means making the copyright system as understandable and accessible to as many members of the public as possible, through initiatives including education and outreach. Sign up to stay updated about future webinars in this series."

Monday, July 7, 2025

YouTube Pirates Are Cashing In on Hollywood’s Summer Blockbusters; The New York Times, July 5, 2025

 Nico Grant and , The New York Times; YouTube Pirates Are Cashing In on Hollywood’s Summer Blockbusters

"But the company also had cause to be concerned. In the days after the Disney film’s opening, a pirated version of “Lilo & Stitch” proved to be a hit on YouTube, where more than 200,000 people viewed it, potentially costing Disney millions of dollars in additional sales, according to new research from Adalytics, a firm that analyzes advertising campaigns for brands.

The findings of the research shed new light on the copyright issues that once threatened to upend YouTube’s business. They also show how advertisers have unwittingly supported illicit content on YouTube, and they provide rare data about piracy on the platform."


Saturday, July 5, 2025

Two Courts Rule On Generative AI and Fair Use — One Gets It Right; Electronic Frontier Foundation (EFF), June 26, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); Two Courts Rule On Generative AI and Fair Use — One Gets It Right

"Gen-AI is spurring the kind of tech panics we’ve seen before; then, as now, thoughtful fair use opinions helped ensure that copyright law served innovation and creativity. Gen-AI does raise a host of other serious concerns about fair labor practices and misinformation, but copyright wasn’t designed to address those problems. Trying to force copyright law to play those roles only hurts important and legal uses of this technology.

In keeping with that tradition, courts deciding fair use in other AI copyright cases should look to Bartz, not Kadrey."

Ousted US copyright chief argues Trump did not have power to remove her; The Register, July 4, 2025

 Lindsay Clark, The Register; Ousted US copyright chief argues Trump did not have power to remove her

"The White House said the power to remove is aligned with the power to appoint. If there is no Librarian of Congress and the president cannot designate an acting librarian, the president's removal authority extends to inferior officers like the register of copyrights, it argued.

Perlmutter was expunged from office a few days after Librarian of Congress Carla Hayden was also shown the door. Hayden was later replaced by deputy attorney general Todd Blanche and Perlmutter by deputy attorney general Paul Perkins.

In the latest filing this week, Perlmutter's legal team said the administration's claim that it had the power to remove her from an office appointed by the Library of Congress employed a "novel constitutional theory" and "sweeping assertions of power."

The Copyright Office is housed in the Library of Congress, and the librarian oversees the Copyright Office head directly, Perlmutter said. Her filing argued that "neither the law nor common sense requires" that the court "should stand idly by and do nothing while [the Trump administration] wields unprecedented, and unlawful, authority.""

Thursday, July 3, 2025

Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default; Forbes, July 2, 2025

Emma Woollacott , Forbes; Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default

"IT service management company Cloudflare is striking back on behalf of content creators, blocking AI scrapers by default.

Web scrapers are bots that crawl the internet, collecting and cataloguing content of all types, and are used by AI firms to collect material that can be used to train their models.

Now, though, Cloudflare is allowing website owners to choose if they want AI crawlers to access their content, and decide how the AI companies can use it. They can opt to allow crawlers for certain purposes—search, for example—but block others. AI companies will have to obtain explicit permission from a website before scraping."