Monday, April 13, 2026

Nobody is governing AI; Quartz, April 8, 2026


Jackie Snow, Quartz; Nobody is governing AI

Artificial intelligence is advancing faster than lawmakers can regulate it, while global AI governance fragments in real time

"Artificial intelligence is now making hiring decisions, tutoring children, optimizing power grids, and targeting weapons systems. The rules governing any of that are, almost everywhere, either nonexistent, stalled in committee, or under active attack.

In the United States, the federal government has spent three years producing executive orders, frameworks, and guidelines, none of which have become law. States that tried to fill the gap have been threatened with funding cuts and lawsuits. In Europe, the most ambitious AI legislation in the world is being delayed or softened before most of it has even taken effect. The technology, meanwhile, has not paused for any of this."

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

Queenie Wong, Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, stating he hopes it might convince people not to repeat the attack despite their opinions on him.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears; The Guardian, April 8, 2026

The Guardian; It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears

"I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.

Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.

The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman."

Sam Altman May Control Our Future—Can He Be Trusted?; The New Yorker, April 6, 2026

The New Yorker; Sam Altman May Control Our Future—Can He Be Trusted?

"Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.”"

The three realities of AI; Axios, April 13, 2026

Ina Fried, Axios; The three realities of AI

"Three distinct camps are forming around AI: power users, doubters and resisters.

Why it matters: AI isn't just advancing — it's fragmenting how people see the world.

The big picture: The disconnect is showing up everywhere — from job-loss fears to data center protests to actual violence.

Doubters still see AI as glitchy chatbots and viral fails. They aren't using its full capabilities.

Power users run AI agents around the clock, trading tips on how to automate work and decision-making.

Resisters understand AI, think they know where it's headed and want no part of it."

Sunday, April 12, 2026

Is AI the greatest art heist in history?; The Guardian, April 12, 2026

The Guardian; Is AI the greatest art heist in history?

New technologies of reproduction are plundering the art world – and getting away with it

"In 2026, it’s easy to see why generative AI is bad. The internet has nicknamed its excretions “slop”. The CEOs of AI companies prance about on stage like supervillains, bragging that their products will eliminate vast swathes of work. Generative AI requires sacrificing the world’s water to feed its hideous data centres. Around the globe, chatbots induce schizophrenic delusions and urge teens to kill themselves – all while turning users’ brains to mush.

Who could have predicted this? Artists, that’s who...

When tech boosters want to demonise resistance, they invoke the luddites. By their telling, the luddites were primitive idiots who smashed machines they were too stupid to understand. History, though, tells a different story. As recounted in Brian Merchant’s sublime work Blood in the Machine, luddites were skilled artisans, fighting for their way of life against the “satanic mills” – textile sweatshops powered by child semi-slaves. Forbidden from unionising, luddites smashed machines as a protest tactic. And they did not lose to the inevitable march of progress. They lost to physical force. The government called in troops, and the luddites were either executed or shipped to penal colonies in Australia.

Artists too are fighting for a way of life. And if we are too disorganised to triumph, that will be everyone’s loss. AI companies’ inappropriate scraping may have started with the work of illustrators like me, but it has grown to encompass everything else. It extends to the billions of dollars that these companies squander each year, to the carbon they burn, to the rare minerals in their chips, to the land on which their data centres sit, to culture, education, sanity and our very imaginations. In return for the entirety of the human and non-human world, the tech lords can only offer us dystopia. Their fantasy future contains neither meaningful work nor real communities, just robots chattering to each other, leaving nothing for us."

Texas Considers Required Reading List for Schools, Which Includes the Bible; The New York Times, April 7, 2026

The New York Times; Texas Considers Required Reading List for Schools, Which Includes the Bible

"Texas education officials are considering sweeping changes to English and social studies instruction that would put readings from the Bible on a new state-required reading list for millions of public school students...

A draft of the list, proposed by the Texas Education Agency, outlines more than 200 texts, with widely recognized classics such as “The Very Hungry Caterpillar” by Eric Carle for kindergartners, “A Wrinkle in Time” by Madeleine L’Engle for seventh graders and the Rev. Dr. Martin Luther King Jr.’s “I Have a Dream” speech for eighth graders.

But it also includes passages from the Bible in middle and high school, raising questions about the separation of church and state.

A second proposed list, from Will Hickman, a Republican member of the state board, would require fewer books overall and include biblical passages starting in elementary school.

Supporters say the Bible excerpts, which include the story of David and Goliath from the Old Testament and a meditation on love from First Corinthians, have important literary value...

Democratic members of the state board have criticized the list for a lack of diversity in a state where Hispanic and Black students represent a majority of public school children. The state agency’s version includes texts from Black historical figures such as Langston Hughes and Frederick Douglass, for example, but has relatively few Black and Hispanic authors overall."

As AI pushes students to reconsider majors, universities struggle to adapt; The Hill, April 12, 2026

LEXI LONAS COCHRAN, The Hill; As AI pushes students to reconsider majors, universities struggle to adapt

"A recent poll shows AI’s increasing role in how students decide on college majors, creating a rapidly developing situation for universities that are still struggling to determine how the technology will shape higher education. 

The Lumina Foundation-Gallup 2026 State of Higher Education survey found 47 percent of currently enrolled college students have thought about switching majors “a great deal” or a “fair amount” over AI concerns." 

The most 'ethical' AI company might also be the web's biggest freeloader; Business Insider, April 12, 2026

Business Insider; The most 'ethical' AI company might also be the web's biggest freeloader

"Cloudflare's latest data offers one of the clearest snapshots yet of how AI companies consume the web, and how little they give back.

The company, which powers roughly 20% of the internet, tracks how AI bots crawl websites versus how often those platforms send users back through referrals. The resulting "crawl-to-refer" ratio is a simple yet telling metric: how much value is extracted compared to returned.

The early April 2026 figures are stark. Anthropic is the worst by a wide margin, with a ratio of 8,800 to 1. That means its bots crawl webpages 8,800 times for every referral sent...

Anthropic's position is particularly striking given its reputation for being "ethical." That reputation has made it a preferred choice among some users who want to support more responsible AI development. This data highlights a different dimension of ethics — how companies interact with the broader web ecosystem that provides information for AI model outputs."
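The "crawl-to-refer" ratio described above reduces to a simple division: pages crawled divided by referral visits sent back. A minimal sketch of the arithmetic, using illustrative counts rather than Cloudflare's raw data (the function name and the numbers are assumptions for the example):

```python
def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """How many bot crawls occur for every referral visit sent back to a site.

    A higher ratio means more value extracted from the web than returned.
    """
    if referrals == 0:
        return float("inf")  # pure extraction: no traffic returned at all
    return crawls / referrals

# Illustrative counts consistent with the article's 8,800-to-1 figure.
ratio = crawl_to_refer_ratio(crawls=8_800_000, referrals=1_000)
print(f"{ratio:,.0f} to 1")  # prints "8,800 to 1"
```

Any pair of counts in that proportion yields the same ratio; the metric says nothing about absolute crawl volume, only about extraction versus return.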

Saturday, April 11, 2026

Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.; The Washington Post, April 11, 2026

The Washington Post; Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.

The artificial intelligence company asked religious leaders for guidance on building a moral chatbot.


"The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.


Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”"

Did AI kill my job, or open up a next chapter?; Public Source, April 10, 2026

[Kip Currier: I posted the following note and excerpt from this Public Source essay for the graduate students in my The Information Professional in Communities course this term:

I'm sharing this Pittsburgh local journalism first person essay by writer Austin Harvey, which I serendipitously came across and have posted to all of my blogs. Given the work that I currently do as a university faculty instructor, the piece raises thorny questions and considerations for me about what information centers/professionals can do to assist and/or "be there" for individuals and communities who are being displaced by AI.

Also, in what ways do academic programs like this one need to better prepare MLIS students to navigate AI-related positive and negative societal changes?

In what ways will information centers/professionals, as well as information center users, potentially be displaced by AI?

In what ways can information centers/professionals proactively adapt and/or manage this disruptive technological change?

What kinds of advocacy and actions by information professionals are required and needed?

Who are potential partners with whom information professionals can confer and collaborate on behalf of communities to strategically address present and future AI-fueled impacts?]


First-person essay by Austin Harvey, Public Source; Did AI kill my job, or open up a next chapter?

"Many writers feared that they would be the first ones to lose their jobs to AI. I did not share this fear, though I feel my heart rate spike every time I use an em-dash now — and you can pry them from my cold, dead hands when I’m gone. I saw value in human writing. I still do, and believe most people agree. We’ve gotten better at identifying AI-generated text, and while there are certainly a litany of websites out there publishing AI-generated articles, readers generally seem averse to them now. 

I was foolish to think none of this would affect me. 

I wasn’t replaced by AI. In fact, ATI’s editors made it very clear that they would never publish AI-generated articles. But AI was still a disruptive force. Search traffic fell. Google changed the rules on SEO and AdSense. We had editors quit or move on to other jobs, but we never hired anyone else to fill their positions. Our team of 12 became a team of seven, and for the better part of two years we were struggling to put out enough content to satisfy the algorithms. I was burning out constantly, still holding on to the idea that this was surely better than self-employment. 

Then, I was called into a meeting and told I was being let go at the end of January...

It wasn’t that I was replaced by AI, or that AI-generated articles were taking all of the search traffic; it was that a great number of people have stopped reading entirely, opting instead to simply ask ChatGPT or Gemini for answers to their questions. It’s an extension of the same issue that has caused many local news outlets to cease operations or cut staff."

How AI is getting better at finding security holes; NPR, April 11, 2026

NPR; How AI is getting better at finding security holes

"In the past few months, AI models have gone from producing hallucinations to becoming effective at finding security flaws in software, according to developers who maintain widely used cyber infrastructure. Those pieces of software, among other things, power operating systems and transfer data for things connected to the internet.

While these new capabilities can help developers make software more secure, they can also be weaponized by hackers and nation states to steal information and money or disrupt critical services.

The latest development of AI's cyber capability came on Tuesday, when AI lab Anthropic announced it had developed a powerful new model the company believes could "reshape cybersecurity." It said that its latest model, Mythos Preview, was able to find "high-severity vulnerabilities, including some in every major operating system and web browser." Not only that, the model was better at coming up with ways to exploit the vulnerabilities it found, which means malicious actors can more effectively achieve their goals.

For now, the company is limiting the access to the model to around 50 select companies and organizations "in an effort to secure the world's most critical software." They're calling the collaboration Project Glasswing, naming it after a butterfly species with transparent wings.

Anthropic says the risk for misuse is so high that it has no plans to release this particular model to the general public, according to the announcement, but it will release other related models. "Our eventual goal is to enable our users to safely deploy Mythos-class models at scale," the company wrote."

Addison Rae Files Copyright Claim Against DHS for Using 'Diet Pepsi' in ICE-Related Video: 'Taylor Swift Could Never!'; International Business Times UK, April 10, 2026

International Business Times UK; Addison Rae Files Copyright Claim Against DHS for Using 'Diet Pepsi' in ICE-Related Video: 'Taylor Swift Could Never!'

"Addison Rae has asserted her control over her creative catalogue by successfully removing her music from a government-led promotional campaign. The pop singer and social media personality recently filed a copyright claim against the Department of Homeland Security after her hit single, 'Diet Pepsi', was featured in a video produced by Immigration and Customs Enforcement (ICE).

The legal intervention resulted in the media being disabled across multiple social platforms, effectively silencing the government's use of her intellectual property. This move places Rae among a growing list of pop stars who have refused to allow their work to be associated with federal enforcement activities.

Addison Rae Takes Legal Action as DHS Disables 'Diet Pepsi' Video

The controversy began when users online noticed Rae's latest track accompanying footage from the government agency. Rae moved quickly to file the claim, ensuring the content was stripped of its audio or removed entirely."

Why A Potential Supreme Court Copyright Decision Could Change The Music Business; Forbes, April 9, 2026

Bill Hochberg, Forbes; Why A Potential Supreme Court Copyright Decision Could Change The Music Business 

"There’s billions at stake for music companies and investors who may get a big hit – and not the kind they like – if the U.S. Supreme Court decides music creators and their heirs can get back their copyrights worldwide."

Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

MAXWELL ZEFF, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Amazon upsets ebook lovers by ending support for old Kindle devices; Guardian, April 9, 2026

The Guardian; Amazon upsets ebook lovers by ending support for old Kindle devices


Up to 2m e-readers made before 2013 will no longer be able to download new titles

"Amazon is to stop supporting older Kindle models, leaving longtime ebook fans unable to access new content from the Kindle store.

Devices released during or before 2012 will no longer receive updates from 20 May, affecting owners of older Kindles, including the earliest models such as the Touch and some Fire tablets. It is thought that 2m e-readers could be affected.

Users will still be able to read ebooks they have downloaded, and their accounts and their Kindle library will remain accessible on mobile and desktop apps. Active users have been offered discounts to help “transition to newer devices”. Amazon said performing a factory reset on affected Kindles would make them unusable.

Disappointed users have vented their frustration online, including in comments on The Verge, accusing Amazon of “causing waste at a large scale” and saying their devices would be reduced to a paperweight despite still working."

Thursday, April 9, 2026

Claude Mythos Is Everyone’s Problem; The Atlantic, April 9, 2026

Matteo Wong, The Atlantic; Claude Mythos Is Everyone’s Problem

What happens when AI can hack everything?

"These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. These are the AI superpowers."

Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’; New York Daily News via Chicago Tribune, April 9, 2026

New York Daily News via Chicago Tribune; Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’

"An unimpressed Manhattan judge ordered a corporate representative for OpenAI to undergo a second deposition after finding he failed to answer “even the simplest questions” the first time around about what the company has described as efforts to limit chatbots from stealing writers’ work.

Magistrate Judge Ona Wang, in a sharply worded 11-page order Tuesday, said OpenAI had been put on notice that the company’s purported expert on plagiarism, John Vincent “Vinnie” Monaco, was woefully underprepared for his January deposition, ordering him to submit to 3.5 more hours of questioning that took place Wednesday.

In granting a motion from the Chicago Tribune, New York Times and other news outlets suing OpenAI to compel the additional testimony, Wang deferred ruling on a request for sanctions, saying it would depend on how Monaco fared in his do-over. She said she may issue fines or recommend some of his answers be deemed as admissions.

OpenAI has previously said that Monaco has more knowledge than any of its engineers about Project Giraffe, an internal operation which the company claims is designed to develop ways to limit its large language models, or LLMs, from inadvertently regurgitating copyrighted works — the issue at the core of the ongoing Manhattan Federal Court lawsuit."

Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law; Electronic Frontier Foundation (EFF), April 8, 2026

MITCH STOLTZ, Electronic Frontier Foundation (EFF); Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law

"Another court has ruled that copyright can’t be used to keep our laws behind a paywall. The U.S. Court of Appeals for the Third Circuit upheld a lower court’s ruling that it is fair use to copy and disseminate building codes that have been incorporated into federal and state law, even though those codes are developed by private parties who claim copyright in them. The court followed the suggestions EFF and others presented in an amicus brief, and joined a growing list of courts that have placed public access to the law over private copyright holders’ desire for control."

Who owns ideas in the AI age?; Fortune, April 8, 2026

Fortune; Who owns ideas in the AI age?; David Shelley, CEO of Hachette’s U.K. and U.S. operations, on taking on Big Tech, defending copyright, and why the future of human creativity is at stake.

"Can you ever really own an idea?"

Wednesday, April 8, 2026

Grambling State Secures Trademark for its "G" Logo After Almost 30 Years; Ebony Magazine, April 7, 2026

STARR ROCQUE, Ebony Magazine; Grambling State Secures Trademark for its "G" Logo After Almost 30 Years

"Grambling State University secured a major win in court this month. The HBCU secured its iconic “G” logo under a US trademark. The historic logo has represented the school’s athletic excellence and pride since the 1970s. However, the process of securing the trademark, led by the Division of Administration and Business Affairs and counsel Kean Miller, had been ongoing since 1998. 

This new milestone follows a coordinated effort to address prior court refusals to grant the trademark while considering other nationally recognized “G” marks, such as those associated with the University of Georgia and the Green Bay Packers."

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Tuesday, April 7, 2026

The ‘Shy Girl’ Fiasco Shows Why Trust in Writers Is Plummeting; The New York Times, March 25, 2026

Andrea Bartz, The New York Times; The ‘Shy Girl’ Fiasco Shows Why Trust in Writers Is Plummeting

"But as generative artificial intelligence worms its way through the publishing industry, I’m bracing for a stomach-turning query: Did you actually write this?

The worry has been at the front of my mind since last week, when Hachette canceled the forthcoming U.S. publication of the horror novel “Shy Girl” after readers and journalists flagged prose that sounded like A.I. slop. (The author maintains that a freelance editor is to blame for any prose written by a large language model.)

Though I’m against the use of generative A.I. in creative writing, not everyone feels the same way. What does seem clear, however, is that most readers want disclosure when A.I. has been used, and they are quick to note the telltale rhythms and patterns of popular large language models.

But as A.I. models continue to improve, I’m concerned that it will become difficult to distinguish between something written by a human versus a bot. As more A.I.-generated writing is put out in the world, more readers will question whether the text they are poring over was penned by a human. We’re barreling toward a rapid erosion of trust between authors and readers, and the publishing industry is unprepared to deal with the consequences."

I told the internet I use AI. Boy, was it mad.; The Washington Post, April 5, 2026

The Washington Post; I told the internet I use AI. Boy, was it mad.

"...Many people think that using AI at any stage of the writing process amounts to outsourcing your thinking to a machine, and they reacted badly to a journalist suggesting some AI use might be all right.

Obviously, I disagree, but I recognize those folks are grappling with important questions, such as “What is writing for?” and “Which uses of AI serve those purposes, and which undermine them?”"

The New York Times drops freelance journalist who used AI to write book review; The Guardian, March 31, 2026

The Guardian; The New York Times drops freelance journalist who used AI to write book review

"The New York Times has cut ties with a freelance journalist after discovering he used artificial intelligence to help write a book review that echoed elements of a review of the same book in the Guardian.

It came after a New York Times reader flagged similarities between the paper’s January review of Watching Over Her by Jean-Baptiste Andrea, written by author and journalist Alex Preston, and an August review of the same book written by Christobel Kent in the Guardian.

The New York Times launched an investigation, during which Preston admitted that he had used AI to assist writing the review and did not spot the sections that were pulled from the Guardian before submitting it. In a statement to the Guardian on Tuesday, Preston said that he was “hugely embarrassed” and had “made a serious mistake”."

The Copyright Act in the age of AI; Politico, April 6, 2026

AARON MAK, Politico; The Copyright Act in the age of AI

"The Copyright Act is reaching a major milestone this year, yet some legal scholars aren’t sure how well it will hold up in the age of artificial intelligence.

Stanford University held a summit on Friday to celebrate (and fret about) the 1976 act, which is the foundation of modern copyright law, as the 50th anniversary of its signing approaches in October. Academics advanced a number of proposals to update and reinterpret American copyright law, though several also warned against stretching it too far. The consensus: AI will reshape copyright whether we like it or not, and it’s time to grapple with the implications."

Iowa can restrict LGBTQ+ books and topics at schools, appellate court rules; Associated Press via The Guardian, April 6, 2026

 Associated Press via The Guardian; Iowa can restrict LGBTQ+ books and topics at schools, appellate court rules

Ruling, vacating lower court’s temporary block, applies to classrooms and libraries up to sixth grade 

"Iowa can enforce a law that restricts teachers from talking about LGBTQ+ topics with students in kindergarten through the sixth grade and bans some books in libraries and classrooms, an appellate court said on Monday.

The decision for now vacates a lower court judge’s temporary blocks on the law.

The measure was first approved by Republican majorities in the Iowa house and senate and the Republican governor, Kim Reynolds, in 2023, which they said reinforced age-appropriate education in kindergarten through 12th grades. It has been a back-and-forth battle in the courts in the three years since lawsuits were filed by the Iowa State Education Association, major publishing houses and bestselling authors, as well as Iowa Safe Schools, an LGBTQ+ advocacy organization."