Sunday, July 27, 2025

Storm chaser Reed Timmer sues Kansas company for using his tornado video; Topeka Capital-Journal, July 24, 2025

Tim Hrenchir, Topeka Capital-Journal; Storm chaser Reed Timmer sues Kansas company for using his tornado video

"Celebrity storm chaser Reed Timmer has sued a Kansas roofing and construction company alleging it committed copyright infringement by using video on Instagram that he took during a 2022 tornado at Andover, Kansas."

Saturday, July 26, 2025

AI and copyright – the state of play, post the US AI Action Plan; PetaPixel, July 25, 2025

Chris Middleton, PetaPixel; AI and copyright – the state of play, post the US AI Action Plan


[Kip Currier: This article effectively skewers the ridiculousness and hypocrisy of the assertion by Trump and the wealthiest corporations on the planet that licensing content to fuel AI LLMs is impossible and too onerous. AI companies would never let users make use of their IP without compensation and permission. Yet these same companies -- and now Trump, via his AI Action Plan -- argue that respecting the copyrights of content holders just isn't "doable".]

[Excerpt]

"The top six most valuable companies on Earth – in history, in fact – are all in AI and tech. Between them, NVIDIA, Microsoft, Apple, Amazon, Alphabet, and Meta already have a market capitalization of $12.9 trillion, roughly equivalent to the value of China's entire economy in 2017-18; or three times the Gross Domestic Product (GDP) of the third largest economy today, Germany, and half that of the US.

Spend trillions of dollars on planet-heating, water-guzzling AI data centers to run the likes of OpenAI's frontier models – systems that (in Trump's view) will be powered by coal? No problem. But license some books when you can scrape millions from known pirate sources? Impossible, it seems.

Whether US courts will agree with that absurd position is unknown."


Friday, July 25, 2025

Mark Cuban says the AI war ‘will get ugly’ and intellectual property ‘is KING’ in the AI world; Fortune, July 22, 2025

SYDNEY LAKE, Fortune; Mark Cuban says the AI war ‘will get ugly’ and intellectual property ‘is KING’ in the AI world

"Major tech companies are battling for AI dominance, pouring tens of billions into infrastructure and offering sky-high compensation packages. Billionaire investor Mark Cuban notes this new phase will see firms locking down valuable AI innovations and expertise rather than sharing them."

Trump’s Comments Undermine AI Action Plan, Threaten Copyright; Publishers Weekly, July 23, 2025

Ed Nawotka, Publishers Weekly; Trump’s Comments Undermine AI Action Plan, Threaten Copyright

"Senate bill proposes 'opt-in' legislation

Trump's comments come on the heels of the introduction, by U.S. senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), of the AI Accountability and Personal Data Protection Act this past Monday following a hearing last week on AI companies' copyright infringement. The bipartisan legislation aims to hold AI firms liable for using copyrighted works or personal data without acquiring explicit consent to train AI models. It would empower individuals—including writers, artists, and content creators—to sue companies in federal court if their data or copyrighted works are used without consent. It also supports class action lawsuits and advocates for violators to pay robust penalties.

"AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse," said Hawley. "It’s time for Congress to give the American worker their day in court to protect their personal data and creative works. My bipartisan legislation would finally empower working Americans who now find their livelihoods in the crosshairs of Big Tech’s lawlessness."

"This bill embodies a bipartisan consensus that AI safeguards are urgent—because the technology is moving at accelerating speed, and so are dangers to privacy," added Blumenthal. "Enforceable rules can put consumers back in control of their data, and help bar abuses. Tech companies must be held accountable—and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent. Consumers must be given rights and remedies—and legal tools to make them real—not relying on government enforcement alone."

Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind; Fortune, July 24, 2025

SHARON GOLDMAN, Fortune; Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind

"For the “accelerationists”—those who believe the rapid development and deployment of artificial intelligence should be pursued as quickly as possible—innovation, scale, and speed are everything. Over-caution and regulation? Ill-conceived barriers that will actually cause more harm than good. They argue that faster progress will unlock massive economic growth, scientific breakthroughs, and national advantage. And if superintelligence is inevitable, they say, the U.S. had better get there first—before rivals like China’s authoritarian regime.

AI ethics and safety has been sidelined

This worldview, articulated by Marc Andreessen in his 2023 blog post, has now almost entirely displaced the diverse coalition of people who worked on AI ethics and safety during the Biden Administration—from mainstream policy experts focused on algorithmic fairness and accountability, to the safety researchers in Silicon Valley who warn of existential risks. While they often disagreed on priorities and tone, both camps shared the belief that AI needed thoughtful guardrails. Today, they find themselves largely out of step with an agenda that prizes speed, deregulation, and dominance.

Whether these groups can claw their way back to the table is still an open question. The mainstream ethics folks—with roots in civil rights, privacy, and democratic governance—may still have influence at the margins, or through international efforts. The existential risk researchers, once tightly linked to labs like OpenAI and Anthropic, still hold sway in academic and philanthropic circles. But in today’s environment—where speed, scale, and geopolitical muscle set the tone—both camps face an uphill climb. If they’re going to make a comeback, I get the feeling it won’t be through philosophical arguments. More likely, it would be because something goes wrong—and the public pushes back."

Thursday, July 24, 2025

President Trump’s AI Action Plan Misses the Mark, Calls for Action Without Vision; Public Knowledge, July 23, 2025

Shiva Stella, Public Knowledge; President Trump’s AI Action Plan Misses the Mark, Calls for Action Without Vision

"Today, the Trump administration announced its artificial intelligence action plan designed to “accelerate AI innovation” – by stepping aside and giving technology companies free rein over how the technology develops. The plan removes state and federal regulatory requirements, eliminates protections against bias and discrimination, fails to address competition concerns, and ignores climate and environmental risks.

The plan does continue to advance important work on developing an AI evaluation ecosystem and supporting critical research on AI interpretability, control, security risks, and advancing the fundamental science of AI. However, these modest steps throw into stark contrast the failure to meaningfully invest in America’s AI future.

Public Knowledge argues that real AI innovation will require real leadership from our democratically elected leaders, investments and actions that break down monopolies and corporate control, and public trust earned by creating AI systems that are safe, fair, and subject to the rule of law...

The following can be attributed to Nicholas Garcia, Senior Policy Counsel at Public Knowledge: 

“This plan is action without vision or direction. Cutting regulations and eliminating protections is, by itself, not a plan for innovation and competition in AI – it is a handout to already-entrenched, powerful tech companies. The real constraints on AI innovation are well-known: access to training data, compute power, and research talent. This plan’s solutions in those areas are severely lacking. At its heart, the plan is starkly divided between political posturing and serious science.

“It is clear that some of the experts’ messages from the public comments reached the White House. Continuing to develop an AI evaluation ecosystem; investing in research on AI interpretability and control; promoting the development and use of open-source and open-weights models; and claiming an international leadership position on evaluating AI national security risks are all critically important policy pursuits. 

“President Trump also spoke strongly in his speech tonight about the need to protect the rights to read and learn. He is absolutely correct about the need to protect those fundamental rights for everyone, including for AI training. Unfortunately, there is no mention of how to protect these rights or address questions about copyright in the AI action plan. 

“Instead of focusing more deeply on research or promoting competition, the AI action plan continues the Trump administration’s attack on diversity and equality, on the green energy solutions needed to both protect our planet and power AI, and on the very institutions of science and learning that are necessary to secure the promise of AI. This demonstrates how the vindictive political project of ‘preventing woke’ directly clashes with achieving actual leadership in AI.

“Ultimately, the plan’s soaring and optimistic language of AI acceleration is undermined by a failure to embrace an affirmative vision of how AI will improve the lives of everyday Americans and how to actually get there. We can only hope that these small steps in the right direction on evaluations, research, and open-source – along with the administration’s remarks on copyright – means that there is more to come to ensure that the American people are the winners of the AI race. As it stands right now, this plan fails to meet the challenges of this pivotal moment.” 

You may view our recent blog post, “Hopes and Fears for President Trump’s AI Action Plan,” for more information."

Donald Trump Is Fairy-Godmothering AI; The Atlantic, July 23, 2025

Matteo Wong, The Atlantic; Donald Trump Is Fairy-Godmothering AI

"In a sense, the action plan is a bet. AI is already changing a number of industries, including software engineering, and a number of scientific disciplines. Should AI end up producing incredible prosperity and new scientific discoveries, then the AI Action Plan may well get America there faster simply by removing any roadblocks and regulations, however sensible, that would slow the companies down. But should the technology prove to be a bubble—AI products remain error-prone, extremely expensive to build, and unproven in many business applications—the Trump administration is more rapidly pushing us toward the bust. Either way, the nation is in Silicon Valley’s hands...

Once the red tape is gone, the Trump administration wants to create a “dynamic, ‘try-first’ culture for AI across American industry.” In other words, build and test out AI products first, and then determine if those products are actually helpful—or if they pose any risks.

Trump gestured toward other concessions to the AI industry in his speech. He specifically targeted intellectual-property laws, arguing that training AI models on copyrighted books and articles does not infringe upon copyright because the chatbots, like people, are simply learning from the content. This has been a major conflict in recent years, with more than 40 related lawsuits filed against AI companies since 2022. (The Atlantic is suing the AI company Cohere, for example.) If courts were to decide that training AI models with copyrighted material is against the law, it would be a major setback for AI companies. In their official recommendations for the AI Action Plan, OpenAI, Microsoft, and Google all requested a copyright exception, known as “fair use,” for AI training. Based on his statements, Trump appears to strongly agree with this position, although the AI Action Plan itself does not reference copyright and AI training.

Also sprinkled throughout the AI Action Plan are gestures toward some MAGA priorities. Notably, the policy states that the government will contract with only AI companies whose models are “free from top-down ideological bias”—a reference to Sacks’s crusade against “woke” AI—and that a federal AI-risk-management framework should “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Trump signed a third executive order today that, in his words, will eliminate “woke, Marxist lunacy” from AI models...

Looming over the White House’s AI agenda is the threat of Chinese technology getting ahead. The AI Action Plan repeatedly references the importance of staying ahead of Chinese AI firms, as did the president’s speech: “We will not allow any foreign nation to beat us; our nation will not live in a planet controlled by the algorithms of the adversaries,” Trump declared...

But whatever happens on the international stage, hundreds of millions of Americans will feel more and more of generative AI’s influence—on salaries and schools, air quality and electricity costs, federal services and doctor’s offices. AI companies have been granted a good chunk of their wish list; if anything, the industry is being told that it’s not moving fast enough. Silicon Valley has been given permission to accelerate, and we’re all along for the ride."

Donald Trump Says AI Companies Can’t Be Expected To Pay For All Copyrighted Content Used In Their Training Models: “Not Do-Able”; Deadline, July 23, 2025

Ted Johnson, Tom Tapp, Deadline; Donald Trump Says AI Companies Can’t Be Expected To Pay For All Copyrighted Content Used In Their Training Models: “Not Do-Able”


[Kip Currier: Don't be fooled by the flimflam rhetoric in Trump's AI Action Plan unveiled yesterday (July 23, 2025). Where Trump's AI Action Plan says “We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas,” it's actually the exact opposite: the Trump plan is censorious and will "cancel out" truth (e.g. on climate science, misinformation and disinformation, etc.) in Orwellian fashion.]


[Excerpt]

"The plan is a contrast to Trump’s predecessor, Joe Biden, who focused on the government’s role in ensuring that the technology was safe.

The Trump White House plan also recommends updating federal procurement guidelines “to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.” Also recommended is revising the National Institute of Standards and Technology AI Risk Management Framework to remove references to misinformation, DEI and climate change.

“We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas,” the plan says."

Wednesday, July 23, 2025

Trump derides copyright and state rules in AI Action Plan launch; Politico, July 23, 2025

MOHAR CHATTERJEE, Politico; Trump derides copyright and state rules in AI Action Plan launch

"President Donald Trump criticized copyright enforcement efforts and state-level AI regulations Wednesday as he launched the White House’s AI Action Plan on a mission to dominate the industry.

In remarks delivered at a “Winning the AI Race” summit hosted by the All-In Podcast and the Hill and Valley Forum in Washington, Trump said stringent copyright enforcement was unrealistic for the AI industry and would kneecap U.S. companies trying to compete globally, particularly against China.

“You can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for,” he said. “You just can’t do it because it’s not doable. ... China’s not doing it.”

Trump’s comments were a riff, as his 28-page AI Action Plan did not wade into copyright, and administration officials told reporters the issue should be left to the courts to decide.

Trump also signed three executive orders. One will fast track federal permitting, streamline reviews and “do everything possible to expedite construction of all major AI infrastructure projects,” Trump said. Another expands American exports of AI hardware and software. A third order bans the federal government from procuring AI technology “that has been infused with partisan bias or ideological agendas,” as Trump put it...

Trump echoed tech companies’ complaints about state AI laws creating a patchwork of regulation. “You can’t have one state holding you up,” he said. “We need one common sense federal standard that supersedes all states, supersedes everybody.”"

Trump has fired the head of the Library of Congress, but the 225-year-old institution remains a ‘library for all’ – so far; The Conversation, July 23, 2025

Associate Professor of Information Science, Drexel University, The Conversation; Trump has fired the head of the Library of Congress, but the 225-year-old institution remains a ‘library for all’ – so far

"A library for all

Following Hayden’s dismissal, Trump appointed Deputy Attorney General Todd Blanche, his former personal lawyer, as acting librarian of Congress. 

Hayden has contended that her dismissal, which occurred alongside other firings of top civil servants, including the national archivist, represents a broad threat to people’s right to easily access free information. 

“Democracies are not to be taken for granted,” Hayden said in June. She explained in an interview with CBS that she never had a problem with a presidential administration and is not sure why she was dismissed.

“And the institutions that support democracy should not be taken for granted,” Hayden added. 

In her final annual report as librarian, Hayden characterized the institution as “truly, a library for all.” So far, even without her leadership, it remains just that."

AI chatbots remain overconfident -- even when they’re wrong; EurekAlert!, July 22, 2025

CARNEGIE MELLON UNIVERSITY, EurekAlert!; AI chatbots remain overconfident -- even when they’re wrong

"Artificial intelligence chatbots are everywhere these days, from smartphone apps and customer service portals to online search engines. But what happens when these handy tools overestimate their own abilities? 

Researchers asked both human participants and four large language models (LLMs) how confident they felt in their ability to answer trivia questions, predict the outcomes of NFL games or Academy Award ceremonies, or play a Pictionary-like image identification game. Both the people and the LLMs tended to be overconfident about how they would hypothetically perform. Interestingly, they also answered questions or identified images with relatively similar success rates.

However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations, according to a study published today in the journal Memory & Cognition.

“Say the people told us they were going to get 18 questions right, and they ended up getting 15 questions right. Typically, their estimate afterwards would be something like 16 correct answers,” said Trent Cash, who recently completed a joint Ph.D. at Carnegie Mellon University in the departments of Social Decision Science and Psychology. “So, they’d still be a little bit overconfident, but not as overconfident.”

“The LLMs did not do that,” said Cash, who was lead author of the study. “They tended, if anything, to get more overconfident, even when they didn’t do so well on the task.”

The world of AI is changing rapidly each day, which makes drawing general conclusions about its applications challenging, Cash acknowledged. However, one strength of the study was that the data was collected over the course of two years, which meant using continuously updated versions of the LLMs known as ChatGPT, Bard/Gemini, Sonnet and Haiku. This means that AI overconfidence was detectable across different models over time.

“When an AI says something that seems a bit fishy, users may not be as skeptical as they should be because the AI asserts the answer with confidence, even when that confidence is unwarranted,” said Danny Oppenheimer, a professor in CMU’s Department of Social and Decision Sciences and coauthor of the study."

Wave of copyright lawsuits hit AI companies like Cambridge-based Suno; WBUR, July 23, 2025

WBUR; Wave of copyright lawsuits hit AI companies like Cambridge-based Suno

"Suno, a Cambridge company that generates AI music, faces multiple lawsuits alleging it illegally trained its model on copyrighted work. Peter Karol of Suffolk Law School and Bhamati Viswanathan of Columbia University Law School's Kernochan Center for Law, Media, and the Arts join WBUR's Morning Edition to explain how the suits against Suno fit into a broader legal battle over the future of creative work.

This segment aired on July 23, 2025. Audio will be available soon."

Tuesday, July 22, 2025

Commentary: A win-win-win path for AI in America; The Post & Courier, July 22, 2025

Keith Kupferschmid, The Post & Courier; Commentary: A win-win-win path for AI in America

"Contrary to claims that these AI training deals are impossible to make at scale, a robust free market is already emerging in which hundreds (if not thousands) of licensed deals between AI companies and copyright owners have been reached. New research shows it is possible to create fully licensed data sets for AI.

No wonder one federal judge recently called claims that licensing is impractical “ridiculous,” given the billions at stake: “If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders.” Just like AI companies don’t dispute that they have to pay for energy, infrastructure, coding teams and the other inputs their operations require, they need to pay for creative works as well.

America’s example to the world is a free-market economy based on the rule of law, property rights and freedom to contract — so, let the market innovate solutions to these new (but not so new) licensing challenges. Let’s construct a pro-innovation, pro-worker approach that replaces the false choice of the AI alarmists with a positive, pro-America pathway to leadership on AI."

Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models; Deadline, July 21, 2025

Ted Johnson, Deadline; Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models

"Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT) introduced legislation on Monday that would restrict AI companies from using copyrighted material in their training models without the consent of the individual owner.

The AI Accountability and Personal Data Protection Act also would allow individuals to sue companies that use their personal data or copyrighted works without their “express, prior consent.”

The bill addresses a raging debate between tech and content owners, one that has already led to extensive litigation. Companies like OpenAI have argued that the use of copyrighted materials in training models is a fair use, while figures including John Grisham and George R.R. Martin have challenged that notion."

Sunday, July 20, 2025

Judge dismisses Trump’s copyright lawsuit against Bob Woodward and audiobook publisher; CNN, July 18, 2025

CNN; Judge dismisses Trump’s copyright lawsuit against Bob Woodward and audiobook publisher

"President Trump’s lawsuit against journalist Bob Woodward and his publishing house Simon & Schuster was dismissed by a federal judge on Friday, the same day that Trump filed a new and unrelated suit against the Wall Street Journal.

US District Judge Paul Gardephe of the Southern District of New York gave Trump’s legal team a month to file another amended complaint. But for now, he dismissed the current iteration and said “it appears unlikely” that Trump “can adequately plead a plausible copyright interest in ‘The Trump Tapes.’”

AI guzzled millions of books without permission. Authors are fighting back.; The Washington Post, July 19, 2025

The Washington Post; AI guzzled millions of books without permission. Authors are fighting back.


[Kip Currier: I've written this before on this blog and I'll say it again: technology companies would never allow anyone to freely vacuum up their content and use it without permission or compensation. Period. Full Stop.]


[Excerpt]

"Baldacci is among a group of authors suing OpenAI and Microsoft over the companies’ use of their work to train the AI software behind tools such as ChatGPT and Copilot without permission or payment — one of more than 40 lawsuits against AI companies advancing through the nation’s courts. He and other authors this week appealed to Congress for help standing up to what they see as an assault by Big Tech on their profession and the soul of literature.

They found sympathetic ears at a Senate subcommittee hearing Wednesday, where lawmakers expressed outrage at the technology industry’s practices. Their cause gained further momentum Thursday when a federal judge granted class-action status to another group of authors who allege that the AI firm Anthropic pirated their books.

“I see it as one of the moral issues of our time with respect to technology,” Ralph Eubanks, an author and University of Mississippi professor who is president of the Authors Guild, said in a phone interview. “Sometimes it keeps me up at night.”

Lawsuits have revealed that some AI companies had used legally dubious “torrent” sites to download millions of digitized books without having to pay for them."

Judge Rules Class Action Suit Against Anthropic Can Proceed; Publishers Weekly, July 18, 2025

Jim Milliot, Publishers Weekly; Judge Rules Class Action Suit Against Anthropic Can Proceed

"In a major victory for authors, U.S. District Judge William Alsup ruled July 17 that three writers suing Anthropic for copyright infringement can represent all other authors whose books the AI company allegedly pirated to train its AI model as part of a class action lawsuit.

In late June, Alsup of the Northern District of California, ruled in Bartz v. Anthropic that the AI company's training of its Claude LLMs on authors' works was "exceedingly transformative," and therefore protected by fair use. However, Alsup also determined that the company's practice of downloading pirated books from sites including Books3, Library Genesis, and Pirate Library Mirror (PiLiMi) to build a permanent digital library was not covered by fair use.

Alsup’s most recent ruling follows an amended complaint from the authors looking to certify classes of copyright owners in a “Pirated Books Class” and in a “Scanned Books Class.” In his decision, Alsup certified only a LibGen and PiLiMi Pirated Books Class, writing that “this class is limited to actual or beneficial owners of timely registered copyrights in ISBN/ASIN-bearing books downloaded by Anthropic from these two pirate libraries.”

Alsup stressed that “the class is not limited to authors or author-like entities,” explaining that “a key point is to cover everyone who owns the specific copyright interest in play, the right to make copies, either as the actual or as the beneficial owner.” Later in his decision, Alsup makes it clear who is covered by the ruling: “A beneficial owner...is someone like an author who receives royalties from any publisher’s revenues or recoveries from the right to make copies. Yes, the legal owner might be the publisher but the author has a definite stake in the royalties, so the author has standing to sue. And, each stands to benefit from the copyright enforcement at the core of our case however they then divide the benefit.”"

US authors suing Anthropic can band together in copyright class action, judge rules; Reuters, July 17, 2025

Reuters; US authors suing Anthropic can band together in copyright class action, judge rules

"A California federal judge ruled on Thursday that three authors suing artificial intelligence startup Anthropic for copyright infringement can represent writers nationwide whose books Anthropic allegedly pirated to train its AI system.

U.S. District Judge William Alsup said the authors can bring a class action on behalf of all U.S. writers whose works Anthropic allegedly downloaded from "pirate libraries" LibGen and PiLiMi to create a repository of millions of books in 2021 and 2022."

Thursday, July 17, 2025

Libraries Pay More for E-Books. Some States Want to Change That.; The New York Times, July 16, 2025

Erik Ofgang, The New York Times; Libraries Pay More for E-Books. Some States Want to Change That.

Proposed legislation would pressure publishers to adjust borrowing limits and find other ways to widen access. 

"Librarians complain that publishers charge so much to license e-books that it’s busting library budgets and frustrating efforts to provide equitable access to reading materials. Big publishers and many authors say that e-book library access undermines their already struggling business models. Smaller presses are split."

What Book Authors’ AI Copyright Court Losses Mean for the Music Business; Billboard, July 14, 2025

RACHEL SCHARF, Billboard; What Book Authors’ AI Copyright Court Losses Mean for the Music Business

While the first copyright rulings have come out on the side of AI platforms, this is hardly a death knell for the music giants' lawsuits against Suno, Udio and Anthropic, legal experts say. 

The Art (and Legality) of Imitation: Navigating the Murky Waters of Fair Use in AI Training; The National Law Review, July 16, 2025

Sarah C. Reasoner, Ashley N. Higginson, Anita C. Marinelli, Kimberly A. Berger of Miller Canfield, The National Law Review; The Art (and Legality) of Imitation: Navigating the Murky Waters of Fair Use in AI Training

"The legal landscape for artificial intelligence is still developing, and no outcome can yet be predicted with any sort of accuracy. While some courts appear poised to accept AI model training as transformative, other courts do not. As AI technology continues to advance, the legal system must adapt to address the unique challenges it presents. Meanwhile, businesses and creators navigating this uncertain terrain should stay informed about legal developments and consider proactive measures to mitigate risks. As we await further rulings and potential legislative action, one thing is clear: the conversation around AI and existing intellectual property protection is just beginning."

Wednesday, July 16, 2025

Musicians brace for impact as Senate vote on public radio looms; The Washington Post, July 15, 2025

The Washington Post; Musicians brace for impact as Senate vote on public radio looms

"For the more than 1,000 public radio stations that play independent music, Boilen says the bill is an existential threat...

“All stations would be in trouble of not being able to play music,” NPR president and CEO Katherine Maher said. The CPB spends nearly $20 million on licensing most years, covering an expense Maher said would be impossible for most stations to afford. “Regardless of how big you are, even the largest station in the NPR network and in public radio still operates on a budget of less than $100 million a year.”

Licensing isn’t the only thing threatened by the rescission bill, which also retracts funding from foreign aid programs such as global AIDS prevention and other public media such as PBS."

Can Gen AI and Copyright Coexist?; Harvard Business Review, July 16, 2025

Harvard Business Review; Can Gen AI and Copyright Coexist?

"We’re experts in the study of digital transformation and have given this issue a lot of thought. We recently served, for example, on a roundtable of 10 economists convened by the U.S. Copyright Office to study the implications of gen AI on copyright policy. We recognize that the two decisions are far from the last word on this topic; both will no doubt be appealed to the Ninth Circuit and then subsequently to the Supreme Court. But in the meantime, we believe there are already many lessons to be learned from these decisions about the implications of gen AI for business—lessons that will be useful for leaders in both the creative industries and gen AI companies."

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."

Tuesday, July 15, 2025

Research Guides in Focus – Intellectual Property Law: A Beginner’s Guide; Library of Congress, July 15, 2025

 Sarah Friedman, Library of Congress; Research Guides in Focus – Intellectual Property Law: A Beginner’s Guide

"The Law Library of Congress is pleased to announce the publication of the new research guide, Intellectual Property Law: A Beginner’s Guide. This guide provides an overview of resources for researching patent, copyright, and trademark law.

The guide begins with a general explanation of intellectual property, followed by print and online resources for further learning about the subject. There are also tabs for resources specific to patent, copyright, and trademark law. For each area of intellectual property law, we have gathered secondary sources, statutes, regulations, treaties, databases for searching records, case law sources, lists of organizations that can assist with applications for protection, and other online resources.

We hope that this guide will be a valuable resource for researchers seeking to learn more about intellectual property laws, researchers searching for existing patent, copyright, and trademark records, and researchers who want to learn about the processes to apply for protection for their intellectual property. As always, we encourage researchers who have further questions, comments, or feedback about this guide to reach out to us through Ask a Librarian."

Monday, July 14, 2025

Popular Rock Band Demands Trump's DHS Take Down ICE Video Over Copyright Violation: 'And Go F–k Yourselves': "It's obvious that you don't respect Copyright Law"; Latin Times, July 14, 2025

Latin Times; Popular Rock Band Demands Trump's DHS Take Down ICE Video Over Copyright Violation: 'And Go F–k Yourselves'

"It's obvious that you don't respect Copyright Law"


"The rock band Black Rebel Motorcycle Club (BRMC) is demanding that the US Department of Homeland Security (DHS) remove a recent video that used their recording of "God's Gonna Cut You Down" without permission.

The band made their disapproval of the DHS very clear, accusing the agency of violating not only copyright law, but fundamental constitutional values.

"It's obvious that you don't respect Copyright Law and Artist Rights any more than you respect Habeas Corpus and Due Process rights," the band wrote. "Not to mention the separation of Church and State per the US Constitution."

"For the record, we hereby order @dhsgov to cease and desist the use of our recording and demand that you immediately pull down your video," the statement continued.

"Oh, and go f–k yourselves," they concluded."

Friday, July 11, 2025

AI must have ethical management, regulation protecting human person, Pope Leo says; The Catholic Register, July 11, 2025

Carol Glatz, The Catholic Register; AI must have ethical management, regulation protecting human person, Pope Leo says

"Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence "requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency," Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope's behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10."

Join Our Livestream: Inside the AI Copyright Battles; Wired, July 11, 2025

Reece Rogers, Wired; Join Our Livestream: Inside the AI Copyright Battles

"WHAT'S GOING ON right now with the copyright battles over artificial intelligence? Many lawsuits regarding generative AI’s training materials were initially filed back in 2023, with decisions just now starting to trickle out. Whether it’s Midjourney generating videos of Disney characters, like Wall-E brandishing a gun, or an exit interview with a top AI lawyer as he left Meta, WIRED senior writer Kate Knibbs has been following this fight for years—and she’s ready to answer your questions.

Bring all your burning questions about the AI copyright battles to WIRED’s next, subscriber-only livestream scheduled for July 16 at 12pm ET / 9am PT, hosted by Reece Rogers with Kate Knibbs. The event will be streamed right here. For subscribers who are not able to join, a replay of the livestream will be available after the event."

Thursday, July 10, 2025

EU's AI code of practice for companies to focus on copyright, safety; Reuters, July 10, 2025

Reuters; EU's AI code of practice for companies to focus on copyright, safety

"The European Commission on Thursday unveiled a draft code of practice aimed at helping firms comply with the European Union's artificial intelligence rules and focused on copyright-protected content safeguards and measures to mitigate systemic risks.

Signing up to the code, which was drawn up by 13 independent experts, is voluntary, but companies that decline to do so will not benefit from the legal certainty provided to a signatory.

The code is part of the AI rule book, which will come into effect in a staggered manner and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and other companies."

EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays; JD Supra, July 8, 2025

Mark Booth, Steven Farmer, Scott Morton, JD Supra; EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays

"The EU AI Act (AI Act), effective since February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on various actors, including providers, deployers, importers and manufacturers, and requires that organizations ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems. For a comprehensive overview of the AI Act, see our earlier client alert.

As of mid-2025, the implementation landscape is evolving. This update takes stock of where things stand, focusing on: (i) new guidance on the AI literacy obligations for providers and deployers; (ii) the status of the developing General-Purpose AI Code of Practice and its implications; and (iii) the prospect of delayed enforcement of some of the AI Act’s key provisions."

Microsoft Pledges $4 Billion Toward A.I. Education; The New York Times, July 9, 2025

Natasha Singer, The New York Times; Microsoft Pledges $4 Billion Toward A.I. Education


[Kip Currier: Not one mention of "ethics" or "AI ethics" in this New York Times article.

So, I sent an email to the reporter today (7/10/25):

Dear Natasha Singer,

I was surprised, and actually disconcerted, to not see any mention of "ethics" and "AI ethics" concepts in your article "Microsoft Pledges $4 Billion Toward A.I. Education". Given well-documented concerns of the vital need for ethical guidelines and frameworks vis-a-vis AI by a wide-range of stakeholders (e.g. religious leaders/Rome Call for AI Ethics, the U.N. AI Advisory Body, academics, etc.), I would have expected your reporting to at least have mentioned potential ethical considerations about this Microsoft funding plan, which carries such significant implications for education and societies.

Best wishes,

Kip Currier]


[Excerpt]

"Microsoft said on Wednesday that it planned to give more than $4 billion in cash and technology services to train millions of people to use artificial intelligence, amid an intensifying Silicon Valley crusade to embed chatbots into classrooms.

Microsoft, the maker of the Copilot chatbot, said the resources would go to schools, community colleges, technical colleges and nonprofits. The company is also starting a new training program, Microsoft Elevate Academy, to “deliver A.I. education and skilling at scale” and help 20 million people earn certificates in A.I.

“Microsoft will serve as an advocate to ensure that students in every school across the country have access to A.I. education,” Brad Smith, the president of Microsoft, said in an interview on Sunday.

Microsoft did not immediately specify how much of the more than $4 billion the company planned to dispense as grants and how much of it would be in the form of Microsoft A.I. services and cloud computing credits.

The announcement comes as tech companies are racing to train millions of teachers and students on their new A.I. tools. Even so, researchers say it is too soon to tell whether the classroom chatbots will end up improving educational outcomes or eroding important skills like critical thinking.

On Tuesday, the American Federation of Teachers, a union representing 1.8 million members, said it was setting up a national A.I. training center for educators, with $23 million in funding from Microsoft and two other chatbot makers, OpenAI and Anthropic."