Monday, June 30, 2025

Carla Hayden, former Librarian of Congress, speaks on her dismissal, the future of libraries at Philadelphia event; WHYY, June 29, 2025

Emily Neil, WHYY; Carla Hayden, former Librarian of Congress, speaks on her dismissal, the future of libraries at Philadelphia event

"Former Librarian of Congress Carla Hayden spoke at the Free Library of Philadelphia Parkway Central Branch on Saturday night, where she sat down for a fireside chat with Ashley Jordan, president and CEO of the African American Museum in Philadelphia...

In his introductory remarks, Kelly Richards, president and director of the Free Library of Philadelphia, said that Hayden has always been a “tireless advocate” for the library systems throughout her career. He said libraries are not just “repositories of knowledge” in a democratic society, but “vibrant centers of community life, education and inclusion.”

“Libraries have a reputation for being a quiet place, but not tonight,” Richards said, as audience members gave Hayden and Jordan a standing ovation when they entered the stage."

The US Copyright Office is wrong about artificial intelligence; The Hill, June 30, 2025

THINH H. NGUYEN AND DEREK E. BAMBAUER, The Hill; The US Copyright Office is wrong about artificial intelligence

"AI is too important to allow copyright to impede its progress, especially as America seeks to maintain its global competitiveness in tech innovation."

Sunday, June 29, 2025

An AI firm won a lawsuit for copyright infringement — but may face a huge bill for piracy; Los Angeles Times, June 27, 2025

Michael Hiltzik, Los Angeles Times; An AI firm won a lawsuit for copyright infringement — but may face a huge bill for piracy


[Kip Currier: Excellent informative overview of some of the principal issues, players, stakes, and recent decisions in the ongoing AI copyright legal battles. Definitely worth 5-10 minutes of your time to read and reflect on.

A key take-away, derived from Judge Vince Chhabria's decision in last week's Meta win, is that:

Artists and authors can win their copyright infringement cases if they produce evidence showing the bots are affecting their market. Chhabria all but pleaded for the plaintiffs to bring some such evidence before him: 

“It’s hard to imagine that it can be fair use to use copyrighted books...to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.” 

But “the plaintiffs never so much as mentioned it,” he lamented.

https://www.latimes.com/business/story/2025-06-27/an-ai-firm-won-a-lawsuit-over-copyright-infringement-but-may-face-a-huge-bill-for-piracy]


[Excerpt]

"Anthropic had to acknowledge a troubling qualification in Alsup’s order, however. Although he found for the company on the copyright issue, he also noted that it had downloaded copies of more than 7 million books from online “shadow libraries,” which included countless copyrighted works, without permission. 

That action was “inherently, irredeemably infringing,” Alsup concluded. “We will have a trial on the pirated copies...and the resulting damages,” he advised Anthropic ominously: Piracy on that scale could expose the company to judgments worth untold millions of dollars...

“Neither case is going to be the last word” in the battle between copyright holders and AI developers, says Aaron Moss, a Los Angeles attorney specializing in copyright law. With more than 40 lawsuits on court dockets around the country, he told me, “it’s too early to declare that either side is going to win the ultimate battle.”...

With billions of dollars, even trillions, at stake for AI developers and the artistic community, no one expects the law to be resolved until the issue reaches the Supreme Court, presumably years from now...

But Anthropic also downloaded copies of more than 7 million books from online “shadow libraries,” which include untold copyrighted works without permission. 

Anthropic “could have purchased books, but it preferred to steal them to avoid ‘legal/practice/business slog,’” Alsup wrote. (He was quoting Anthropic co-founder and CEO Dario Amodei.)...

Artists and authors can win their copyright infringement cases if they produce evidence showing the bots are affecting their market...

The truth is that the AI camp is just trying to get something for free instead of paying for it. Never mind the trillions of dollars in revenue they say they expect over the next decade — they claim that licensing will be so expensive it will stop the march of this supposedly historic technology dead in its tracks.

Chhabria aptly called this argument “nonsense.” If using books for training is as valuable as the AI firms say it is, he noted, then surely a market for book licensing will emerge. That is, it will — if the courts don’t give the firms the right to use stolen works without compensation."

ACM FAccT: ACM Conference on Fairness, Accountability, and Transparency; June 23-26, 2025, Athens, Greece


ACM FAccT

ACM Conference on Fairness, Accountability, and Transparency

A computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

"Algorithmic systems are being adopted in a growing number of contexts, fueled by big data. These systems filter, sort, score, recommend, personalize, and otherwise shape human experience, increasingly making or informing decisions with major impact on access to, e.g., credit, insurance, healthcare, parole, social security, and immigration. Although these systems may bring myriad benefits, they also contain inherent risks, such as codifying and entrenching biases; reducing accountability, and hindering due process; they also increase the information asymmetry between individuals whose data feed into these systems and big players capable of inferring potentially relevant information.

ACM FAccT is an interdisciplinary conference dedicated to bringing together a diverse community of scholars from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. Research challenges are not limited to technological solutions regarding potential bias, but include the question of whether decisions should be outsourced to data- and code-driven computing systems. We particularly seek to evaluate technical solutions with respect to existing problems, reflecting upon their benefits and risks; to address pivotal questions about economic incentive structures, perverse implications, distribution of power, and redistribution of welfare; and to ground research on fairness, accountability, and transparency in existing legal requirements." 

Saturday, June 28, 2025

Global South voices ‘marginalised in AI Ethics’; Gates Cambridge, June 27, 2025

Gates Cambridge; Global South voices ‘marginalised in AI Ethics’

"A Gates Cambridge Scholar is first author of a paper on how AI Ethics is sidelining Global South voices, reinforcing marginalisation.

The study, Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production, was published by the Association for Computing Machinery and is based on a study of nearly 6,000 AI Ethics publications between 1960 and 2024. Its first author is Abdullah Hasan Safir [2024], who is doing a PhD in Interdisciplinary Design. Other co-authors include Gates Cambridge Scholars Ramit Debnath [2018] and Kerry McInerney [2017].

The findings were recently presented at the ACM’s FAccT conference, considered one of the top AI Ethics conferences in the world. They show that experts from the Global North currently legitimise their expertise in AI Ethics through dynamic citational and collaborative practices in knowledge production within the field, including co-citation and the institutionalisation of AI Ethics."

The Anthropic Copyright Ruling Exposes Blind Spots on AI; Bloomberg, June 26, 2025

Bloomberg; The Anthropic Copyright Ruling Exposes Blind Spots on AI


[Kip Currier: It's still early days in the AI copyright legal battles underway between AI tech companies and everyone else whose training data was "scarfed up" to enable the former to create lucrative AI tools and products. But cases like this week's Anthropic lawsuit win and another suit won by Meta (with some issues still to be adjudicated regarding the use of pirated materials as AI training data) are now giving us some more discernible "tea leaves" and "black letter law" as to how courts are likely to rule vis-a-vis AI inputs.

This week being the much ballyhooed 50th anniversary of the so-called "1st summer blockbuster flick" Jaws ("you're gonna need a bigger boat"), these rulings make me think we, the public, may need a bigger copyright law schema, one that sets out protections for the creatives making the fuel that enables stratospherically profitable AI innovations. The Jaws metaphor may be a bit on-the-nose, but one can't help but see AI tech companies as rapacious sharks imperiling the financial survival and long-standing business models of human creators.

As touched on in this Bloomberg article, too, there's a moral argument here: the fact that a court may say the AI tech companies' uncompensated, unpermissioned use of creative works is legal doesn't mean that it's ethically justifiable. Nor does it mean that these companies shouldn't be required, by updated federal copyright legislation and licensing frameworks, to fairly compensate creators for the use of their copyrighted works. After all, billionaire tech oligarchs like Zuckerberg, Musk, and Altman would never allow others to do to them what they've done to creatives with impunity and zero contrition.

Are you listening, Congress?

Or are all of you in the pockets of AI tech company lobbyists, rather than representing the needs and interests of all of your constituents, not just the billionaire class?]


[Excerpt]

"In what is shaping up to be a long, hard fight over the use of creative works, round one has gone to the AI makers. In the first such US decision of its kind, District Judge William Alsup said Anthropic’s use of millions of books to train its artificial-intelligence model, without payment to the sources, was legal under copyright law because it was “transformative — spectacularly so.”...

If a precedent has been set, as several observers believe, it stands to cripple one of the few possible AI monetization strategies for rights holders, which is to sell licenses to firms for access to their work. Some of these deals have already been made while the “fair use” question has been in limbo, deals that emerged only after the threat of legal action. This ruling may have just taken future deals off the table...

Alsup was right when he wrote that “the technology at issue was among the most transformative many of us will see in our lifetimes.”...

But that doesn’t mean it shouldn’t pay its way. Nobody would dare suggest Nvidia Corp. CEO Jensen Huang hand out his chips free. No construction worker is asked to keep costs down by building data center walls for nothing. Software engineers aren’t volunteering their time to Meta Platforms Inc. in awe of Mark Zuckerberg’s business plan — they instead command salaries of $100 million and beyond. 

Yet, as ever, those in the tech industry have decided that creative works, and those who create them, should be considered of little or no value and must step aside in service of the great calling of AI — despite being every bit as vital to the product as any other factor mentioned above. As science-fiction author Harlan Ellison said in his famous sweary rant, nobody ever wants to pay the writer if they can get away with it. When it comes to AI, paying creators of original work isn’t impossible, it’s just inconvenient. Legislators should leave companies no choice."

Friday, June 27, 2025

No One Is in Charge at the US Copyright Office; Wired, June 27, 2025

"It’s a tumultuous time for copyright in the United States, with dozens of potentially economy-shaking AI copyright lawsuits winding through the courts. It’s also the most turbulent moment in the US Copyright Office’s history. Described as “sleepy” in the past, the Copyright Office has taken on new prominence during the AI boom, issuing key rulings about AI and copyright. It also hasn’t had a leader in more than a month...

As the legality of the ouster is debated, the reality within the office is this: There’s effectively nobody in charge. And without a leader actually showing up at work, the Copyright Office is not totally business-as-usual; in fact, there’s debate over whether the copyright certificates it’s issuing could be challenged."

Getty drops copyright allegations in UK lawsuit against Stability AI; AP, June 25, 2025

 KELVIN CHAN, AP; Getty drops copyright allegations in UK lawsuit against Stability AI

"Getty Images dropped copyright infringement allegations from its lawsuit against artificial intelligence company Stability AI as closing arguments began Wednesday in the landmark case at Britain’s High Court. 

Seattle-based Getty’s decision to abandon the copyright claim removes a key part of its lawsuit against Stability AI, which owns a popular AI image-making tool called Stable Diffusion. The two have been facing off in a widely watched court case that could have implications for the creative and technology industries."

Denmark to tackle deepfakes by giving people copyright to their own features; The Guardian, June 27, 2025

The Guardian; Denmark to tackle deepfakes by giving people copyright to their own features

"The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.

The Danish government said on Thursday it would strengthen protection against digital imitations of people’s identities with what it believes to be the first law of its kind in Europe."

Wednesday, June 25, 2025

Judge dismisses authors’ copyright lawsuit against Meta over AI training; AP, June 25, 2025

 MATT O’BRIEN AND BARBARA ORTUTAY, AP; Judge dismisses authors’ copyright lawsuit against Meta over AI training

"Although Meta prevailed in its request to dismiss the case, it could turn out to be a pyrrhic victory. In his 40-page ruling, Chhabria repeatedly indicated reasons to believe that Meta and other AI companies have turned into serial copyright infringers as they train their technology on books and other works created by humans, and seemed to be inviting other authors to bring cases to his court presented in a manner that would allow them to proceed to trial.

The judge scoffed at arguments that requiring AI companies to adhere to decades-old copyright laws would slow down advances in a crucial technology at a pivotal time. “These products are expected to generate billions, even trillions of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.”

Ball State University Libraries Launches Research Guide on Ethical AI Use; Ball State University, June 24, 2025

 Ball State University; Ball State University Libraries Launches Research Guide on Ethical AI Use

"In an era in which artificial intelligence tools are rapidly reshaping how we access and share information, Ball State University Libraries has introduced a new research guide to help students, faculty, staff, and community members use AI more thoughtfully and effectively.

The interactive guide, now available at bsu.libguides.com, equips users with foundational skills to assess the credibility, accuracy, and ethical implications of generative AI tools like ChatGPT and image generators. Through five short videos and practical examples, the guide teaches users to identify potential misinformation, recognize AI-generated bias, and apply AI output in meaningful and responsible ways.

Key learning outcomes include:"

Tuesday, June 24, 2025

Anthropic’s AI copyright ‘win’ is more complicated than it looks; Fast Company, June 24, 2025

CHRIS STOKEL-WALKER, Fast Company; Anthropic’s AI copyright ‘win’ is more complicated than it looks

"And that’s the catch: This wasn’t an unvarnished win for Anthropic. Like other tech companies, Anthropic allegedly sourced training materials from piracy sites for ease—a fact that clearly troubled the court. “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote, referring to Anthropic’s alleged pirating of more than 7 million books.

That alone could carry billions in liability, with statutory damages starting at $750 per book—a trial on that issue is still to come.

So while tech companies may still claim victory (with some justification, given the fair use precedent), the same ruling also implies that companies will need to pay substantial sums to legally obtain training materials. OpenAI, for its part, has in the past argued that licensing all the copyrighted material needed to train its models would be practically impossible.

Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction: The U.S.,” she says. “I think they don’t entirely have purchase over this thing about whether or not it was transformative in the sense of changing Claude’s output.”"
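For a rough sense of the scale described in the excerpt above, here is a minimal back-of-envelope sketch using only the two figures quoted there (roughly 7 million downloaded books and the $750-per-work statutory minimum). It is illustrative only; actual exposure would turn on how many works are registered and proven infringed, and on what per-work award is actually set at trial.

# Rough, illustrative estimate of potential statutory-damages exposure, using
# only the figures quoted in the excerpt above. Actual liability depends on how
# many works qualify and on the per-work award a court or jury actually sets.

PIRATED_BOOKS = 7_000_000   # "more than 7 million books" cited in the ruling
STATUTORY_MINIMUM = 750     # per-work statutory minimum under 17 U.S.C. § 504(c)

def exposure(works: int, per_work: int) -> int:
    """Total damages if every work drew the same per-work award."""
    return works * per_work

print(f"At the $750 statutory minimum: ${exposure(PIRATED_BOOKS, STATUTORY_MINIMUM):,}")
# Prints: At the $750 statutory minimum: $5,250,000,000

Even at the statutory floor the figure lands in the billions, which is the "billions in liability" Fast Company refers to.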

The copyright war between the AI industry and creatives; Financial Times, June 23, 2025

Financial Times; The copyright war between the AI industry and creatives

"One is that the government itself estimates that “creative industries generated £126bn in gross value added to the economy [5 per cent of GDP] and employed 2.4 million people in 2022”. It is at the very least an open question whether the value added of the AI industry will ever be of a comparable scale in this country. Another is that the creative industries represent much of the best of what the UK and indeed humanity does. The idea of handing over its output for free is abhorrent...

Interestingly, for much of the 19th century, the US did not recognise international copyright at all in its domestic law. Anthony Trollope himself complained fiercely about the theft of the copyright over his books."

Anthropic wins key US ruling on AI training in authors' copyright lawsuit; Reuters, June 24, 2025

Reuters; Anthropic wins key US ruling on AI training in authors' copyright lawsuit

 "A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.

Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made "fair use" of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.

Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a "central library" infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement."

Study: Meta AI model can reproduce almost half of Harry Potter book; Ars Technica, June 20, 2025

TIMOTHY B. LEE, Ars Technica; Study: Meta AI model can reproduce almost half of Harry Potter book

"In recent years, numerous plaintiffs—including publishers of books, newspapers, computer code, and photographs—have sued AI companies for training models using copyrighted material. A key question in all of these lawsuits has been how easily AI models produce verbatim excerpts from the plaintiffs’ copyrighted content.

For example, in its December 2023 lawsuit against OpenAI, The New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a “fringe behavior” and a “problem that researchers at OpenAI and elsewhere work hard to address.”

But is it actually a fringe behavior? And have leading AI companies addressed it? New research—focusing on books rather than newspaper articles and on different companies—provides surprising insights into this question. Some of the findings should bolster plaintiffs’ arguments, while others may be more helpful to defendants.

The paper was published last month by a team of computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. They studied whether five popular open-weight models—three from Meta and one each from Microsoft and EleutherAI—were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright."
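For readers curious what a memorization probe looks like in practice, below is a minimal, generic sketch of one way to check whether an open-weight model reproduces a book passage verbatim under greedy decoding. It is not the cited paper's methodology; the model name, token counts, and helper function are illustrative placeholders.

# Generic sketch (not the cited study's method) of probing an open-weight model
# for verbatim memorization: prompt with a passage's opening tokens and check
# whether greedy decoding reproduces the rest of the passage exactly.
# Model name, token counts, and the helper below are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/pythia-6.9b"  # placeholder open-weight model
PROMPT_TOKENS = 50                     # tokens of the passage used as the prompt
CONTINUATION_TOKENS = 50               # tokens of continuation to compare

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def reproduces_verbatim(passage: str) -> bool:
    """True if greedy decoding of the prompt yields the passage's own continuation."""
    ids = tokenizer(passage, return_tensors="pt").input_ids[0]
    prompt = ids[:PROMPT_TOKENS]
    reference = ids[PROMPT_TOKENS:PROMPT_TOKENS + CONTINUATION_TOKENS]
    output = model.generate(
        prompt.unsqueeze(0),
        max_new_tokens=CONTINUATION_TOKENS,
        do_sample=False,  # greedy decoding: the model's single most likely continuation
    )
    generated = output[0][len(prompt):len(prompt) + CONTINUATION_TOKENS]
    return tokenizer.decode(generated) == tokenizer.decode(reference)

A single greedy continuation like this understates memorization compared with estimating the probability of reproducing an excerpt across many passages, so it should be read only as a first-pass check.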

Copyright Cases Should Not Threaten Chatbot Users’ Privacy; Electronic Frontier Foundation (EFF), June 23, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); Copyright Cases Should Not Threaten Chatbot Users’ Privacy

"Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy."

Monday, June 23, 2025

Pope: Intelligence is seeking life's true meaning, not having reams of data; United States Conference of Catholic Bishops, June 20, 2025

Carol Glatz, United States Conference of Catholic Bishops; Pope: Intelligence is seeking life's true meaning, not having reams of data

"Access to vast amounts of data and information is not the same thing as having intelligence, which is uniquely human and requires being open to truth, goodness and the real meaning of life, Pope Leo XIV told AI experts and executives.

"Authentic wisdom has more to do with recognizing the true meaning of life than with the availability of data," he said in a written message released by the Vatican June 20.

"Acknowledging and respecting what is uniquely characteristic of the human person is essential to the discussion of any adequate ethical framework for the governance of AI," he wrote.

The message, written in English, was addressed to people attending the second annual Rome conference on AI, Ethics and the Future of Corporate Governance being held in Rome and at the Vatican June 19-20.

The conference "brings together executives from leading AI companies as well as large enterprises using AI with policymakers, scholars, ethicists and lawyers to consider in a holistic way the challenges facing the ethics and governance of AI, both for companies developing this revolutionary technology as well as the enterprises incorporating AI into their businesses," according to the event's website."

The Pope has a message for AI executives; Quartz, June 20, 2025

 Michael Barclay, Quartz; The Pope has a message for AI executives

Pope Leo wants AI to be regulated ethically, while the U.S. is poised to bar any state-level regulations for a decade

"At the Second Annual Rome Conference on Artificial Intelligence on Friday, Pope Leo talked about where AI is headed.

The event was attended by Vatican officials, American academics, and Silicon Valley executives from Google, OpenAI, Anthropic, Meta, and more. The new pope urged serious reflection on “the inherently ethical dimension of AI, as well as its responsible governance...

Pope Leo said AI’s benefits and risks must be evaluated using a “superior ethical criterion,” adding that it “challenges all of us to reflect more deeply on the true nature and uniqueness of our shared human dignity.” He added that “access to data — however extensive — must not be confused with intelligence.”"

Sunday, June 22, 2025

Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican; CNN, June 20, 2025

CNN; Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican

"Pope Leo XIV says tech companies developing artificial intelligence should abide by an “ethical criterion” that respects human dignity.

AI must take “into account the well-being of the human person not only materially, but also intellectually and spiritually,” the pope said in a message sent Friday to a gathering on AI attended by Vatican officials and Silicon Valley executives.

“No generation has ever had such quick access to the amount of information now available through AI,” he said. But “access to data — however extensive — must not be confused with intelligence.”

He also expressed concern about AI’s impact on children’s “intellectual and neurological development,” writing that “society’s well-being depends upon their being given the ability to develop their God-given gifts and capabilities.”

That statement from the Pope came on the second of a two-day meeting for tech leaders in Rome to discuss the societal and ethical implications of artificial intelligence. The second annual Rome Conference on AI was attended by representatives from AI leaders including Google, OpenAI, Anthropic, IBM, Meta and Palantir along with academics from Harvard and Stanford and representatives of the Holy See.

The event comes at a somewhat fraught moment for AI, with the rapidly advancing technology promising to improve worker productivity, accelerate research and eradicate disease, but also threatening to take human jobs, produce misinformation, worsen the climate crisis and create even more powerful weapons and surveillance capabilities. Some tech leaders have pushed back against regulations intended to ensure that AI is used responsibly, which they say could hinder innovation and global competition.

“In some cases, AI has been used in positive and indeed noble ways to promote greater equality, but there is likewise the possibility of its misuse for selfish gain at the expense of others, or worse, to foment conflict and aggression,” Leo said in his Friday statement."

Saturday, June 21, 2025

How a Single Court Case Could Determine the Future of Book Banning in America; Literary Hub, June 17, 2025

Literary Hub; How a Single Court Case Could Determine the Future of Book Banning in America

"Bottom line: in Texas, Louisiana, and Mississippi–the states covered by the Fifth Circuit–libraries are free to remove books for any reason. 

The plaintiffs now face a choice: accept the Fifth Circuit’s ruling or appeal to the Supreme Court. If there is an appeal, the court may not accept it. Over 7,000 cases are appealed to the high court each year, and it hears only 100-150–less than two percent. Yet I think Little v. Llano County has a good chance of making the docket. 

For one thing, the Fifth Circuit’s en banc reversal of its own panel’s ruling suggests the need for the high court to step in. Second, this Supreme Court has been eager to revisit earlier precedents. In the last few years, it has curtailed abortion protections, ended Chevron deference, and canceled affirmative action in college admissions–all long-standing, seemingly bedrock principles. Why not target Pico, especially since it wasn’t a decisive ruling to begin with?

Third, unlike most book ban cases, Little pertains not to a school library but a public one. Public libraries, according to UCLA professor Eugene Volokh, are not like school libraries. “I tentatively think a public school,” Volokh wrote, “is entitled to decide which viewpoints to promote through its own library,” whereas public libraries “are much more about giving more options to readers, rather than about teaching particular skills and attitudes to students.” 

Public libraries also serve more people–an entire county rather than a school system. Remember Judge Duncan’s belief that anyone who wants a certain book “can buy it or borrow it from somewhere else”? Llano County is small and rural, and many of its residents may not have the purchase option. For them, a book being unavailable in a library is a de facto ban. The pro-library organization EveryLibrary agrees, writing that the Fifth Circuit’s opinion “reveals an indifference to the lived reality of millions of Americans for whom public libraries are their only or primary means of access to books.” 

Here’s hoping that, if Little or any other book ban case ends up before this Supreme Court, those nine justices will consider the issue thoughtfully, creatively, and most important, impartially."

Bill Clinton says he wondered if Trump administration might try to ban his latest book; The Hill, June 18, 2025

JUDY KURTZ, The Hill; Bill Clinton says he wondered if Trump administration might try to ban his latest book

"Maya Angelou, who read the inaugural poem at my first inauguration — wrote it, and read it and was a great human being — the first thing the White House did was to ban her book, ‘I Know Why the Caged Bird Sings,'” Clinton recalled.

Angelou’s 1969 autobiography was reportedly one of nearly 400 books that was pulled from the U.S. Naval Academy library in April as part of an effort to remove titles containing diversity, equity and inclusion content. 

Calling it a “magnificent book,” Clinton reflected on Angelou’s personal story about a child who “loses the ability to speak for a couple of years because she was abused, and then she blooms.”

“I couldn’t figure out why that was a problem,” Clinton said.

“I don’t like book banning,” the 42nd president added.

“I wasn’t ever for banning books that were full of things they said about me that weren’t true,” Clinton said.

“It never occurred to me that I should stop you from reading them.”"

Unbound Pages: Authors Against Book Bans fights for the freedom to read; WGBH, June 20, 2025

 Andrea Asuaje, WGBH; Unbound Pages: Authors Against Book Bans fights for the freedom to read

"Thousands of books are facing scrutiny throughout the country as the book-banning movement continues to gain support, from Florida, to Wisconsin and even New Hampshire. Now, hundreds of authors are using their voices off the page to spread awareness about the effect book bans have on democracy and free speech.

The organization Authors Against Book Bans (AABB), which was formed in 2024, is focused on the freedom to read and composed of authors from all genres who write for readers of all ages. Many of the members have had their work challenged or banned, like AABB board member, Adib Khorram, author of several books including the often-challenged or banned “Darius The Great Is Not Okay.” The book and its sequel, “Darius The Great Deserves Better,” have come under fire for addressing race, sexuality and, according to Khorram, Marxist ideology."

Trump administration could change the way we read, from book bans to author talks; USA TODAY, June 18, 2025

Clare Mulroy, USA TODAY; Trump administration could change the way we read, from book bans to author talks

"Hazelwood's tour snag sparked a discussion on book communities about how President Donald Trump's recent policies would trickle down to publishing. Amid book banning, border policies, new anti-DEI sentiments and federal library grant cuts, these are the ways the new administration may impact readers. 

Trump administration's policies shake author tour plans...

Authors worry about impact of Trump, DEI backslide...

Grant cuts threaten libraries, public spaces for readers...

Book banning continues in libraries, classrooms"

Carla Hayden on her time as a pioneering librarian of Congress and getting fired by Trump; PBS News, June 20, 2025

PBS News; Carla Hayden on her time as a pioneering librarian of Congress and getting fired by Trump

"Geoff Bennett: What effect do you believe censorship has on our democracy?

  • Dr. Carla Hayden:

    As Alberto Manguel said, as centuries of dictators, tyrants, slave owners and other illicit holders of power have known, an illiterate crowd is the easiest to rule. And if you cannot restrict a people from learning to read, you must limit its scope.

    And that is the danger of making sure that people don't have access.

  • Geoff Bennett:

    She says she will keep advocating for her beliefs and feels bolstered by support from elected officials on both sides of the aisle, as well as from people across the country.

    She shared that her 93-year-old mother has been cataloging the notes and messages she's received. A former president of the American Library Association, Hayden is set to address some of its 50,000 members at their annual meeting. This year's agenda, she says, takes on new urgency.

  • Dr. Carla Hayden:

    How to help communities support their libraries, how to deal with personal attacks that libraries are having, even death threats in some communities for libraries.

    So this convening of librarians that are in schools, universities, public libraries will be really our rally. We have been called feisty fighters for freedom."