Thursday, July 17, 2025

Libraries Pay More for E-Books. Some States Want to Change That.; The New York Times, July 16, 2025

Erik Ofgang, The New York Times; Libraries Pay More for E-Books. Some States Want to Change That.

Proposed legislation would pressure publishers to adjust borrowing limits and find other ways to widen access. 

"Librarians complain that publishers charge so much to license e-books that it’s busting library budgets and frustrating efforts to provide equitable access to reading materials. Big publishers and many authors say that e-book library access undermines their already struggling business models. Smaller presses are split."

What Book Authors’ AI Copyright Court Losses Mean for the Music Business; Billboard, July 14, 2025

RACHEL SCHARF, Billboard; What Book Authors’ AI Copyright Court Losses Mean for the Music Business

While the first copyright rulings have come out on the side of AI platforms, this is hardly a death knell for the music giants' lawsuits against Suno, Udio and Anthropic, legal experts say. 

The Art (and Legality) of Imitation: Navigating the Murky Waters of Fair Use in AI Training; The National Law Review, July 16, 2025

Sarah C. Reasoner, Ashley N. Higginson, Anita C. Marinelli, Kimberly A. Berger of Miller Canfield, The National Law Review; The Art (and Legality) of Imitation: Navigating the Murky Waters of Fair Use in AI Training

"The legal landscape for artificial intelligence is still developing, and no outcome can yet be predicted with any sort of accuracy. While some courts appear poised to accept AI model training as transformative, other courts do not. As AI technology continues to advance, the legal system must adapt to address the unique challenges it presents. Meanwhile, businesses and creators navigating this uncertain terrain should stay informed about legal developments and consider proactive measures to mitigate risks. As we await further rulings and potential legislative action, one thing is clear: the conversation around AI and existing intellectual property protection is just beginning."

Wednesday, July 16, 2025

Musicians brace for impact as Senate vote on public radio looms; The Washington Post, July 15, 2025

The Washington Post; Musicians brace for impact as Senate vote on public radio looms

"For the more than 1,000 public radio stations that play independent music, Boilen says the bill is an existential threat...

“All stations would be in trouble of not being able to play music,” NPR president and CEO Katherine Maher said. The CPB spends nearly $20 million on licensing most years, covering an expense Maher said would be impossible for most stations to afford. “Regardless of how big you are, even the largest station in the NPR network and in public radio still operates on a budget of less than $100 million a year.”

Licensing isn’t the only thing threatened by the rescission bill, which also retracts funding from foreign aid programs such as global AIDS prevention and other public media such as PBS."

Can Gen AI and Copyright Coexist?; Harvard Business Review, July 16, 2025

Harvard Business Review; Can Gen AI and Copyright Coexist?

"We’re experts in the study of digital transformation and have given this issue a lot of thought. We recently served, for example, on a roundtable of 10 economists convened by the U.S. Copyright Office to study the implications of gen AI on copyright policy. We recognize that the two decisions are far from the last word on this topic; both will no doubt be appealed to the Ninth Circuit and then subsequently to the Supreme Court. But in the meantime, we believe there are already many lessons to be learned from these decisions about the implications of gen AI for business—lessons that will be useful for leaders in both the creative industries and gen AI companies."

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."

Tuesday, July 15, 2025

Research Guides in Focus – Intellectual Property Law: A Beginner’s Guide; Library of Congress, July 15, 2025

 Sarah Friedman, Library of Congress; Research Guides in Focus – Intellectual Property Law: A Beginner’s Guide

"The Law Library of Congress is pleased to announce the publication of the new research guide, Intellectual Property Law: A Beginner’s Guide. This guide provides an overview of resources for researching patent, copyright, and trademark law.

The guide begins with a general explanation of intellectual property, followed by print and online resources for further learning about the subject. There are also tabs for resources specific to patent, copyright, and trademark law. For each area of intellectual property law, we have gathered secondary sources, statutes, regulations, treaties, databases for searching records, case law sources, lists of organizations that can assist with applications for protection, and other online resources.

We hope that this guide will be a valuable resource for researchers seeking to learn more about intellectual property laws, researchers searching for existing patent, copyright, and trademark records, and researchers who want to learn about the processes to apply for protection for their intellectual property. As always, we encourage researchers who have further questions, comments, or feedback about this guide to reach out to us through Ask a Librarian."

Monday, July 14, 2025

Popular Rock Band Demands Trump's DHS Take Down ICE Video Over Copyright Violation: 'And Go F–k Yourselves': "It's obvious that you don't respect Copyright Law"; Latin Times, July 14, 2025

Latin Times; Popular Rock Band Demands Trump's DHS Take Down ICE Video Over Copyright Violation: 'And Go F–k Yourselves'

"It's obvious that you don't respect Copyright Law"


"The rock band Black Rebel Motorcycle Club (BRMC) is demanding that the US Department of Homeland Security (DHS) remove a recent video that used their recording of "God's Gonna Cut You Down" without permission.

The band made their disapproval of the DHS very clear, accusing the agency of violating not only copyright law, but fundamental constitutional values.

"It's obvious that you don't respect Copyright Law and Artist Rights any more than you respect Habeas Corpus and Due Process rights," the band wrote. "Not to mention the separation of Church and State per the US Constitution."

"For the record, we hereby order @dhsgov to cease and desist the use of our recording and demand that you immediately pull down your video," the statement continued.

"Oh, and go f–k yourselves," they concluded."

Friday, July 11, 2025

AI must have ethical management, regulation protecting human person, Pope Leo says; The Catholic Register, July 11, 2025

Carol Glatz, The Catholic Register; AI must have ethical management, regulation protecting human person, Pope Leo says

"Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence "requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency," Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope's behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10."

Join Our Livestream: Inside the AI Copyright Battles; Wired, July 11, 2025

Reece Rogers, Wired; Join Our Livestream: Inside the AI Copyright Battles

"WHAT'S GOING ON right now with the copyright battles over artificial intelligence? Many lawsuits regarding generative AI’s training materials were initially filed back in 2023, with decisions just now starting to trickle out. Whether it’s Midjourney generating videos of Disney characters, like Wall-E brandishing a gun, or an exit interview with a top AI lawyer as he left Meta, WIRED senior writer Kate Knibbs has been following this fight for years—and she’s ready to answer your questions.

Bring all your burning questions about the AI copyright battles to WIRED’s next, subscriber-only livestream scheduled for July 16 at 12pm ET / 9am PT, hosted by Reece Rogers with Kate Knibbs. The event will be streamed right here. For subscribers who are not able to join, a replay of the livestream will be available after the event."

Thursday, July 10, 2025

EU's AI code of practice for companies to focus on copyright, safety; Reuters, July 10, 2025

Reuters; EU's AI code of practice for companies to focus on copyright, safety

"The European Commission on Thursday unveiled a draft code of practice aimed at helping firms comply with the European Union's artificial intelligence rules and focused on copyright-protected content safeguards and measures to mitigate systemic risks.

Signing up to the code, which was drawn up by 13 independent experts, is voluntary, but companies that decline to do so will not benefit from the legal certainty provided to a signatory.

The code is part of the AI rule book, which will come into effect in a staggered manner and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and other companies."

EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays; JD Supra, July 8, 2025

Mark Booth, Steven Farmer, Scott Morton, JD Supra; EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays

"The EU AI Act (AI Act), effective since February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on various actors, including providers, deployers, importers and manufacturers, and requires that organizations ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems. For a comprehensive overview of the AI Act, see our earlier client alert.

As of mid-2025, the implementation landscape is evolving. This update takes stock of where things stand, focusing on: (i) new guidance on the AI literacy obligations for providers and deployers; (ii) the status of the developing General-Purpose AI Code of Practice and its implications; and (iii) the prospect of delayed enforcement of some of the AI Act’s key provisions."

Microsoft Pledges $4 Billion Toward A.I. Education; The New York Times, July 9, 2025

Natasha Singer, The New York Times; Microsoft Pledges $4 Billion Toward A.I. Education


[Kip Currier: Not one mention of "ethics" or "AI ethics" in this New York Times article.

So, I sent an email to the reporter today (7/10/25):

Dear Natasha Singer,

I was surprised, and actually disconcerted, to not see any mention of "ethics" or "AI ethics" concepts in your article "Microsoft Pledges $4 Billion Toward A.I. Education". Given well-documented concerns about the vital need for ethical guidelines and frameworks vis-a-vis AI from a wide range of stakeholders (e.g. religious leaders/Rome Call for AI Ethics, the U.N. AI Advisory Body, academics, etc.), I would have expected your reporting to at least have mentioned potential ethical considerations about this Microsoft funding plan, which carries such significant implications for education and societies.

Best wishes,

Kip Currier]

 

[Excerpt]

"Microsoft said on Wednesday that it planned to give more than $4 billion in cash and technology services to train millions of people to use artificial intelligence, amid an intensifying Silicon Valley crusade to embed chatbots into classrooms.

Microsoft, the maker of the Copilot chatbot, said the resources would go to schools, community colleges, technical colleges and nonprofits. The company is also starting a new training program, Microsoft Elevate Academy, to “deliver A.I. education and skilling at scale” and help 20 million people earn certificates in A.I.

“Microsoft will serve as an advocate to ensure that students in every school across the country have access to A.I. education,” Brad Smith, the president of Microsoft, said in an interview on Sunday.

Microsoft did not immediately specify how much of the more than $4 billion the company planned to dispense as grants and how much of it would be in the form of Microsoft A.I. services and cloud computing credits.

The announcement comes as tech companies are racing to train millions of teachers and students on their new A.I. tools. Even so, researchers say it is too soon to tell whether the classroom chatbots will end up improving educational outcomes or eroding important skills like critical thinking.

On Tuesday, the American Federation of Teachers, a union representing 1.8 million members, said it was setting up a national A.I. training center for educators, with $23 million in funding from Microsoft and two other chatbot makers, OpenAI and Anthropic."

Wednesday, July 9, 2025

How the Vatican Is Shaping the Ethics of Artificial Intelligence; American Enterprise Institute, July 7, 2025

Shane Tews, American Enterprise Institute; How the Vatican Is Shaping the Ethics of Artificial Intelligence

"Father Paolo Benanti is an Italian Catholic priest, theologian, and member of the Third Order Regular of St. Francis. He teaches at the Pontifical Gregorian University and has served as an advisor to both former Pope Francis and current Pope Leo on matters of artificial intelligence and technology ethics within the Vatican.

Below is a lightly edited and abridged transcript of our discussion...

In the Vatican document, you emphasize that AI is just a tool—an elegant one, but it shouldn’t control our thinking or replace human relationships. You mention it “requires careful ethical consideration for human dignity and common good.” How do we identify that human dignity point, and what mechanisms can alert us when we’re straying from it?

I’ll try to give a concise answer, but don’t forget that this is a complex element with many different applications, so you can’t reduce it to one answer. But the first element—one of the core elements of human dignity—is the ability to self-determine our trajectory in life. I think that’s the core element, for example, in the Declaration of Independence. All humans have rights, but you have the right to the pursuit of happiness. This could be the first description of human rights.

In that direction, we could have a problem with this kind of system because one of the first and most relevant elements of AI, from an engineering perspective, is its prediction capabilities. Every time a streaming platform suggests what you can watch next, it’s changing the number of people using the platform or the online selling system. This idea that interaction between human beings and machines can produce behavior is something that could interfere with our quality of life and pursuit of happiness. This is something that needs to be discussed.

Now, the problem is: don’t we have a cognitive right to know if we have a system acting in that way? Let me give you some numbers. When you’re 65, you’re probably taking three different drugs per day. When you reach 68 to 70, you probably have one chronic disease. Chronic diseases depend on how well you stick to therapy. Think about the debate around insulin and diabetes. If you forget to take your medication, your quality of life deteriorates significantly. Imagine using this system to help people stick to their therapy. Is that bad? No, of course not. Or think about using it in the workplace to enhance workplace safety. Is that bad? No, of course not.

But if you apply it to your life choices—your future, where you want to live, your workplace, and things like that—that becomes much more intense. Once again, the tool could become a weapon, or the weapon could become a tool. This is why we have to ask ourselves: do we need something like a cognitive right regarding this? That you are in a relationship with a machine that has the tendency to influence your behavior.

Then you can accept it: “I have diabetes, I need something that helps me stick to insulin. Let’s go.” It’s the same thing that happens with a smartwatch when you have to close the rings. The machine is pushing you to have healthy behavior, and we accept it. Well, right now we have nothing like that framework. Should we think about something in the public space? It’s not a matter of allowing or preventing some kind of technology. It’s a matter of recognizing what it means to be human in an age of such powerful technology—just to give a small example of what you asked me."

Viewpoint: Don’t let America’s copyright crackdown hand China global AI leadership; Grand Forks Herald, July 5, 2025

Kent Conrad and Saxby Chambliss, Grand Forks Herald; Viewpoint: Don’t let America’s copyright crackdown hand China global AI leadership


[Kip Currier: The assertion by anti-AI regulation proponents, like the former U.S. congressional authors of this think-piece, that requiring AI tech companies to secure permission and pay for AI training data will kill or hobble U.S. AI entrepreneurship is hyperbolic catastrophizing. AI tech companies can license training data from creators who are willing to participate in licensing frameworks. Such frameworks already exist for music copyrights, for example. AI tech companies just don't want to pay for something if they can get it for free.

AI tech companies would never permit users to scrape up, package, and sell their IP content for free. Copyright holders shouldn't be held to a different standard and be required to let tech companies monetize their IP-protected works without permission and compensation.]

[Excerpt]

"If these lawsuits succeed, or if Congress radically rewrites the law, it will become nearly impossible for startups, universities or mid-size firms to develop competitive AI tools."

Why the new rulings on AI copyright might actually be good news for publishers; Fast Company, July 9, 2025

PETE PACHAL, Fast Company; Why the new rulings on AI copyright might actually be good news for publishers

"The outcomes of both cases were more mixed than the headlines suggest, and they are also deeply instructive. Far from closing the door on copyright holders, they point to places where litigants might find a key...

Taken together, the three cases point to a clearer path forward for publishers building copyright cases against Big AI:

Focus on outputs instead of inputs: It’s not enough that someone hoovered up your work. To build a solid case, you need to show that what the AI company did with it reproduced it in some form. So far, no court has definitively decided whether AI outputs are meaningfully different enough to count as “transformative” in the eyes of copyright law, but it should be noted that courts have ruled in the past that copyright violation can occur even when small parts of the work are copied—if those parts represent the “heart” of the original.

Show market harm: This looks increasingly like the main battle. Now that we have a lot of data on how AI search engines and chatbots—which, to be clear, are outputs—are affecting the online behavior of news consumers, the case that an AI service harms the media market is easier to make than it was a year ago. In addition, the emergence of licensing deals between publishers and AI companies is evidence that there’s market harm by creating outputs without offering such a deal.

Question source legitimacy: Was the content legally acquired or pirated? The Anthropic case opens this up as a possible attack vector for publishers. If they can prove scraping occurred through paywalls—without subscribing first—that could be a violation even absent any outputs."

U.S. Copyright Office Announces Webinar on Copyright Essentials for Writers; U.S. Copyright Office, Webinar: August 6, 2025 1 PM EDT

 U.S. Copyright Office; U.S. Copyright Office Announces Webinar on Copyright Essentials for Writers

"The U.S. Copyright Office invites you to register to attend the third session in our Copyright Essentials webinar series. The Plot Thickens: Copyright Essentials for Writers will take place on August 6 at 1:00 p.m. eastern time. 

In this session, the Copyright Office will discuss what writers should know about copyright. We will cover information for writers of various literary works—from novels and blogs to poetry, cookbooks, textbooks, and more. The session will also review suitable application options and how our Public Information Office can help you along the way. 

Attendees will also learn copyright basics, answers to commonly asked questions, and where to find Copyright Office educational resources.

Speakers:

  • Jessica Chinnadurai, Attorney-Advisor, Office of Public Information and Education
  • Laura Kaiser, Attorney-Advisor, Office of Public Information and Education

Prior Copyright Essentials webinars can be viewed on our website.

The Copyright Office strategic goal of Copyright for All means making the copyright system as understandable and accessible to as many members of the public as possible, through initiatives including education and outreach. Sign up to stay updated about future webinars in this series."

Monday, July 7, 2025

YouTube Pirates Are Cashing In on Hollywood’s Summer Blockbusters; The New York Times, July 5, 2025

Nico Grant, The New York Times; YouTube Pirates Are Cashing In on Hollywood’s Summer Blockbusters

"But the company also had cause to be concerned. In the days after the Disney film’s opening, a pirated version of “Lilo & Stitch” proved to be a hit on YouTube, where more than 200,000 people viewed it, potentially costing Disney millions of dollars in additional sales, according to new research from Adalytics, a firm that analyzes advertising campaigns for brands.

The findings of the research shed new light on the copyright issues that once threatened to upend YouTube’s business. They also show how advertisers have unwittingly supported illicit content on YouTube, and they provide rare data about piracy on the platform."


Saturday, July 5, 2025

Two Courts Rule On Generative AI and Fair Use — One Gets It Right; Electronic Frontier Foundation (EFF), June 26, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); Two Courts Rule On Generative AI and Fair Use — One Gets It Right

 "Gen-AI is spurring the kind of tech panics we’ve seen before; then, as now, thoughtful fair use opinions helped ensure that copyright law served innovation and creativity. Gen-AI does raise a host of other serious concerns about fair labor practices and misinformation, but copyright wasn’t designed to address those problems. Trying to force copyright law to play those roles only hurts important and legal uses of this technology.

In keeping with that tradition, courts deciding fair use in other AI copyright cases should look to Bartz, not Kadrey."

Ousted US copyright chief argues Trump did not have power to remove her; The Register, July 4, 2025

 Lindsay Clark, The Register; Ousted US copyright chief argues Trump did not have power to remove her

"The White House said the power to remove is aligned with the power to appoint. If there is no Librarian of Congress and the president cannot designate an acting librarian, the president's removal authority extends to inferior officers like the register of copyrights, it argued.

Perlmutter was expunged from office a few days after Librarian of Congress Carla Hayden was also shown the door. Hayden was later replaced by deputy attorney general Todd Blanche and Perlmutter by deputy attorney general Paul Perkins.

In the latest filing this week, Perlmutter's legal team said the administration's claim that it had the power to remove her from an office appointed by the Library of Congress employed a "novel constitutional theory" and "sweeping assertions of power."

The Copyright Office is housed in the Library of Congress, and the librarian oversees the Copyright Office head directly, Perlmutter said. Her filing argued that "neither the law nor common sense requires" that the court should "stand idly by and do nothing while [the Trump administration] wields unprecedented, and unlawful, authority.""

Thursday, July 3, 2025

Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default; Forbes, July 2, 2025

Emma Woollacott, Forbes; Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default

"IT service management company Cloudflare is striking back on behalf of content creators, blocking AI scrapers by default.

Web scrapers are bots that crawl the internet, collecting and cataloguing content of all types, and are used by AI firms to collect material that can be used to train their models.

Now, though, Cloudflare is allowing website owners to choose if they want AI crawlers to access their content, and decide how the AI companies can use it. They can opt to allow crawlers for certain purposes—search, for example—but block others. AI companies will have to obtain explicit permission from a website before scraping."
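For readers curious what permission-based crawling looks like from the crawler's side, here is a minimal sketch, assuming the site publishes its crawl policy in a standard robots.txt file; the site URL and crawler name below are hypothetical, and Cloudflare's own default blocking happens at the network edge rather than in the crawler's code:

# Hypothetical sketch: a crawler that checks a site's robots.txt before fetching a page.
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"            # hypothetical publisher site
USER_AGENT = "ExampleAIBot"             # hypothetical AI crawler name

robots = RobotFileParser()
robots.set_url(SITE + "/robots.txt")
robots.read()                           # download and parse the site's stated policy

page = SITE + "/articles/some-story"    # hypothetical page to crawl
if robots.can_fetch(USER_AGENT, page):
    print("Policy allows crawling:", page)
else:
    print("Policy disallows", USER_AGENT, "for", page, "- skipping")

A robots.txt check like this is voluntary on the crawler's part, which is why Cloudflare's approach of enforcing the choice at the network edge, before the request reaches the site, is notable.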

2012 Video of Bill Moyers on the Freedom to Read and the "Bane of Banning Books"; Ethics, Info, Tech: Contested Voices, Values, Spaces, July 3, 2025

Kip Currier; 2012 Video of Bill Moyers on the Freedom to Read and the "Bane of Banning Books"

Nobody writes more illuminating "I-didn't-know-THAT-about-that-person" obituaries than the New York Times. (I didn't know, for example, that Moyers was an ordained Baptist minister.) And, true to form, the Times has an excellent obituary detailing the service-focused life of Bill Moyers, who passed away on June 26, 2025 at the age of 91. 

The moment I learned of his death, my mind went to a 3-minute video clip of Moyers that I've continued to use in a graduate ethics course lecture I give on Intellectual Freedom and Censorship. The clip is from 2012 but the vital importance of libraries and the freedom to read that Moyers extolls is as timely and essential as ever, given the explosion of book bans and censorship besetting the U.S. right now.

Below is a description of the video clip, followed by the video link:

"The Bane of Banned Books

September 25, 2012

In honor of the 30th anniversary of the American Library Association’s “Banned Books Week,” Bill talks about the impact libraries have had on his youth, his dismay over book challenges in modern times, and why censorship is the biggest enemy of truth."

https://billmoyers.com/content/the-bane-of-banned-books/

Wednesday, July 2, 2025

Fair Use or Foul Play? The AI Fair Use Copyright Line; The National Law Review, July 2, 2025

Jodi Benassi of McDermott Will & Emery, The National Law Review; Fair Use or Foul Play? The AI Fair Use Copyright Line

"Practice note: This is the first federal court decision analyzing the defense of fair use of copyrighted material to train generative AI. Two days after this decision issued, another Northern District of California judge ruled in Kadrey et al. v. Meta Platforms Inc. et al., Case No. 3:23-cv-03417, and concluded that the AI technology at issue in his case was transformative. However, the basis for his ruling in favor of Meta on the question of fair use was not transformation, but the plaintiffs’ failure “to present meaningful evidence that Meta’s use of their works to create [a generative AI engine] impacted the market” for the books."

Eminem, AI and me: why artists need new laws in the digital age; The Guardian, July 2, 2025

The Guardian; Eminem, AI and me: why artists need new laws in the digital age

"Song lyrics, my publisher informs me, are subject to notoriously strict copyright enforcement and the cost to buy the rights is often astronomical. Fat chance as well, then, of me quoting Eminem to talk about how Lose Yourself seeped into the psyche of a generation when he rapped: “You only get one shot, do not miss your chance to blow, this opportunity comes once in a lifetime.”

Oh would it be different if I were an AI company with a large language model (LLM), though. I could scrape from the complete discography of the National and Eminem, and the lyrics of every other song ever written. Then, when a user prompted something like, “write a rap in the style of Eminem about losing money, and draw inspiration from the National’s Bloodbuzz Ohio”, my word correlation program – with hundreds of millions of paying customers and a market capitalisation worth tens if not hundreds of billions of dollars – could answer:

“I still owe money to the money to the money I owe,

But I spit gold out my throat when I flow,

So go tell the bank they can take what they like

I already gave my soul to the mic.”

And that, according to rulings last month by the US courts, is somehow “fair use” and is perplexingly not copyright infringement at all, despite no royalties having been paid to anyone in the process."

Evangelical Report Says AI Needs Ethics; Christianity Today, July/August 2025

DANIEL SILLIMAN, Christianity Today; Evangelical Report Says AI Needs Ethics

"The Swiss Evangelical Alliance published a 78-page report on the ethics of artificial intelligence, calling on Christians to “help reduce the misuse of AI” and “set an example in the use of AI by demonstrating how technology can be used responsibly and for the benefit of all.” Seven people worked on the paper, including two theologians, several software engineers and computer science experts, a business consultant, and a futurist. They rejected the idea that Christians should close themselves off to AI, as that would not do anything to mitigate the risks of the developing technology. The group concluded that AI has a lot of potential to do good, if given ethical boundaries and shaped by Christian values such as honesty, integrity, and charity."

Tuesday, July 1, 2025

Inside the battle for control of the Library of Congress; Federal News Network, July 1, 2025

Terry Gerton, Federal News Network; Inside the battle for control of the Library of Congress

"Terry Gerton I’m speaking with Kevin Kosar. He’s a senior fellow at the American Enterprise Institute. So those are interesting theories. And as you mentioned though, the library is a research library, not a lending library. So AI is not going to train itself on printed books. It needs electronic information. What is the impact on the day-to-day operations of the library and the copyright office?

Kevin Kosar: Well, right now, certainly, it’s a little anxiety-provoking for people at the Library of Congress, this kind of peculiar state of, are we suddenly going to find ourselves answering to a new boss in the form of the president? They are more than aware of what’s happened at other executive agencies where the president has sent in people from the Department of Government Efficiency and started turning off people’s computers and telling them not to come into work and canceling contracts and doing any number of other things that are, you know, hugely disruptive to workers’ day-to-day life. So there’s that anxiety there. And if this move by the Trump administration plays out, it’s really hard to see what could ultimately occur. One thing that’s clear to me is that if you have presidential control of the Library of Congress, then the Congressional Research Service is doomed. For those listeners out there who are not familiar with the Congressional Research Service, this is Congress’ think tank. This is about 600 individual civil servants whose job is to provide nonpartisan research, analysis and facts to legislators and their staff to help them better do their jobs. And if you have a president who takes over the library, that president can appoint the head of the Congressional Research Service and turn it into basically a presidential tool, which would make it useless.

Terry Gerton: And the administration has sort of already said that it puts no stock in CRS’s products."

KY library book challenges rose 1,000% in 2024. That’s not a typo. What happened?; Lexington Herald Leader, June 30, 2025

John Cheves, Lexington Herald Leader; KY library book challenges rose 1,000% in 2024. That’s not a typo. What happened?

"Challenges to Kentucky public library books soared by 1,061% last year, rising from 26 incidents in 2023 to 302 incidents in 2024, according to a recently released state report. That eye-popping number is buried in small type at the bottom of page six of the annual Statistical Report of Kentucky Public Libraries, published in April by the Kentucky Department of Libraries and Archives."

The problems with California’s pending AI copyright legislation; Brookings, June 30, 2025

Brookings; The problems with California’s pending AI copyright legislation

 "California’s pending bill, AB-412, is a well-intentioned but problematic approach to addressing artificial intelligence (AI) and copyright currently moving through the state’s legislature. If enacted into law, it would undermine innovation in generative AI (GenAI) not only in California but also nationally, as it would impose onerous requirements on both in-state and out-of-state developers that make GenAI models available in California. 

The extraordinary capabilities of GenAI are made possible by the use of extremely large sets of training data that often include copyrighted content. AB-412 arose from the very reasonable concerns that rights owners have in understanding when and how their content is being used for building GenAI models. But the bill imposes a set of unduly burdensome and unworkable obligations on GenAI developers. It also favors large rights owners, which will be better equipped than small rights owners to pursue the litigation contemplated by the bill."


The Court Battles That Will Decide if Silicon Valley Can Plunder Your Work; Slate, June 30, 2025

Slate; The Court Battles That Will Decide if Silicon Valley Can Plunder Your Work

"Last week, two different federal judges in the Northern District of California made legal rulings that attempt to resolve one of the knottiest debates in the artificial intelligence world: whether it’s a copyright violation for Big Tech firms to use published books for training generative bots like ChatGPT. Unfortunately for the many authors who’ve brought lawsuits with this argument, neither decision favors their case—at least, not for now. And that means creators in all fields may not be able to stop A.I. companies from using their work however they please...

What if these copyright battles are also lost? Then there will be little in the way of stopping A.I. startups from utilizing all creative works for their own purposes, with no consideration as to the artists and writers who actually put in the work. And we will have a world blessed less with human creativity than one overrun by second-rate slop that crushes the careers of the people whose imaginations made that A.I. so potent to begin with."

AI companies start winning the copyright fight; The Guardian, July 1, 2025

The Guardian; AI companies start winning the copyright fight

"The lawsuits over AI-generated text were filed first, and, as their rulings emerge, the next question in the copyright fight is whether decisions about one type of media will apply to the next.

“The specific media involved in the lawsuit – written works versus images versus videos versus audio – will certainly change the fair-use analysis in each case,” said John Strand, a trademark and copyright attorney with the law firm Wolf Greenfield. “The impact on the market for the copyrighted works is becoming a key factor in the fair-use analysis, and the market for books is different than that for movies.”

To Strand, the cases over images seem more favorable to copyright holders, as the AI models are allegedly producing images identical to the copyrighted ones in the training data.

A bizarre and damning fact was revealed in the Anthropic ruling, too: the company had pirated and stored some 7m books to create a training database for its AI. To remediate its wrongdoing, the company bought physical copies and scanned them, digitizing the text. Now the owner of 7m physical books that no longer held any utility for it, Anthropic destroyed them. The company bought the books, diced them up, scanned the text and threw them away, Ars Technica reports. There are less destructive ways to digitize books, but they are slower. The AI industry is here to move fast and break things.

Anthropic laying waste to millions of books presents a crude literalization of the ravenous consumption of content necessary for AI companies to create their products."