Thursday, April 16, 2026

That Meeting You Hate May Keep A.I. From Stealing Your Job; The New York Times, April 15, 2026

The New York Times; That Meeting You Hate May Keep A.I. From Stealing Your Job

"Mr. Sirk’s experience, while perhaps extreme, reflects the broader impact of A.I. in the workplace: It is vastly accelerating many of the tasks conducted by white-collar workers, and even replacing some of these tasks altogether. What it can’t automate — at least not yet — are the hard-coded requirements of bureaucracy.

With the help of A.I., white-collar workers can generate far more memos or strategy options than in the past and churn out more product prototypes or software features. But some executive still has to decide which option to greenlight. Workers can gin up many more sales pitches, but they still have to persuade clients to sign on the dotted line.

As A.I. makes the production of knowledge work more and more efficient, the job of presenting, debating, lobbying, arm-twisting, reassuring or just plain selling the work appears to be rising in importance. And the need for those sometimes messy human tasks may limit the number of people A.I. displaces.

“These were always important skills,” said David Deming, an economist who is the dean of Harvard College. “But as the information landscape becomes more saturated, the ability to tell a story out of it — to take a ton of text and turn it into something people want — is more valuable.”"

Wednesday, April 15, 2026

AI Makes Securing Copyright Protection for Software Code Tricky; Bloomberg Law, April 15, 2026

 Michael Justus, Carlton Fields, Bloomberg Law; AI Makes Securing Copyright Protection for Software Code Tricky


[Kip Currier: I recommend this brief article in Bloomberg Law; the authors do a great job identifying current AI, IP, and human- and AI-related coding issues, such as "vibe coding". They also provide practical strategies for endeavoring to secure copyright protection for code.]


"Copyright protection for software code is being sacrificed, knowingly or not, for the speed and efficiency of AI coding.

This rapid shift in the role of humans from writing code to managing artificial intelligence tools upends traditional copyright protection strategies. Original human-written code is generally copyrightable. But AI-generated code that lacks human authorship is ineligible for copyright protection under US law.

“Vibe coding”—where humans describe a desired software program in natural language and GenAI tools write the code—is pervasive. This isn’t limited to the tech industry. Employees across industries are vibe coding software solutions, which can be valuable to employers.

Developers estimate that 42% of code is AI-generated or AI-assisted, and that share is expected to increase significantly, according to an October 2025 survey.

The lack of copyright protections is a big deal...

The key is bespoke curation into a creative whole from many options."

Pittsburgh Post-Gazette Saved From Closure by Nonprofit; The New York Times, April 14, 2026

The New York Times; Pittsburgh Post-Gazette Saved From Closure by Nonprofit


[Kip Currier: What great news to learn that the Pittsburgh Post-Gazette will not be closing on May 3, 2026! Instead, one of America's oldest newspapers will continue through the non-profit Venetoulis Institute for Local Journalism.

Newspapers are still essential sources for access to information. They also promote literacy, free expression, and informed citizenries -- crucial elements of functioning democracies.

Sadly, three print newspapers serving Northwestern Pennsylvania have ceased publication in the past two months -- Clarion News (1840), (Franklin's) The News-Herald (1886), and (Oil City's) The Derrick (1871); The Derrick is continuing as an online only publication.]


"The Pittsburgh Post-Gazette, which had been set to shut down in May, will keep publishing after all. A nonprofit journalism organization has stepped up to acquire the newspaper, which has survived for more than two centuries.

The Venetoulis Institute for Local Journalism, which runs The Baltimore Banner and is financed by the hotel magnate Stewart W. Bainum Jr., said on Tuesday that it had reached an agreement with the newspaper’s current owner, Block Communications, to buy the assets of The Post-Gazette and run it as a nonprofit. The transaction is expected to take effect on May 4, ensuring there is no gap in publishing.

The deal is a rare spot of good news for the media industry, which has endured waves of metropolitan and local newspaper closures and widening local news deserts around the country for the past two decades. A 2025 report by Northwestern University found that more than 130 papers had shut in the preceding year alone.

The Post-Gazette is one of the oldest newspapers in the United States, tracing its history back to The Pittsburgh Gazette, which was founded in 1786. It has been owned by Block Communications since 1927, and has won multiple Pulitzer Prizes. Its closure would have made Pittsburgh one of the largest metropolitan areas without a major newspaper."

Tuesday, April 14, 2026

Sam Altman home attacks spark concern over AI-motivated violence; Axios, April 14, 2026

Nadia Lopez, Axios; Sam Altman home attacks spark concern over AI-motivated violence

"The big picture: These incidents come amid heightened tension around AI's rapid development, with public anxiety over its political and economic implications rising even as companies continue to push the technology forward.

Threat level: AI is being cast in increasingly existential terms, including by its own creators. Warnings over the chaos the technology could unleash have become part of mainstream discourse, alongside promises of sweeping economic transformation.

This dual promise of disruption and progress has helped elevate AI into one of the most consequential policy debates in the world, but also one of the most emotionally charged."

You might be suffering from AI brain fry; NPR, April 13, 2026

 NPR; You might be suffering from AI brain fry

"HERMAN: Yeah. I mean, the researchers, they describe this as basically hopping around between different tools and feeling overwhelmed. Not by just having to multi-task - which is already a problem in a lot of jobs - but by dealing with a whole bunch of output. So if you have a programming tool that can kind of run in the background and starts adding features to software really quickly, you have another tool that's constructing a report from you, it's searching the web and pulling together, you know, a market research document. You have another tool in the background that you're in a, like, constant chat with trying to refine some idea for a talk you have to give - you're just kind of getting first pulled in all these different directions, and then you're kind of spamming yourself. Like, you're just producing...

(LAUGHTER)

HERMAN: ...All of this product. And it's harder, you know, as you use more and more tools to keep track of, like, whether this output is actually relevant to your job, whether you're doing anything that you need to be doing or whether you're kind of creating new work for yourself. And so the researchers described in this survey of nearly 1,500 different people in different professions, this sensation of feeling kind of like, as they say it, fried or having, like, a brain fog, feeling kind of like mentally paralyzed by the amount of stuff that you have to keep track of and kind of check and monitor."

When Using AI Leads to “Brain Fry”; Harvard Business Review, March 5, 2026

Harvard Business Review; When Using AI Leads to “Brain Fry”

"AI promises to act as an amplifier that will drive efficiency and make work easier, but workers who are using these AI tools report that they are intensifying rather than simplifying work.

This problem is becoming more common."

Agency in the Age of AI; Time, April 14, 2026

John Palfrey, Time; Agency in the Age of AI

"OpenAI’s recent acquisition of OpenClaw, an open-source, autonomous AI agent designed to run locally on a user’s computer, is a sign that AI agents are quickly being given more responsibilities and more access—from emails to bank accounts, a decision with unintended consequences, including deleted inboxes and Amazon Web Services outages. Peter Steinberger, the founder of OpenClaw, said he wants to “build an agent that even my mum can use.” But there is a difference between using technology to improve efficiency and giving technology agency that humans should hold. 

These developments prompt hard questions, particularly for young people who are seeking agency in their personal and professional lives. Does it make sense to train to be an actuary if AI is supposed to be good at predicting unknown outcomes based on data? Is it worth the cost today to train to be a lawyer or an accountant or pursue higher education at all when all the answers are supposedly at our fingertips? Put another way, what does agency look like in an era dominated by the spread of AI?"

AI-Generated Animation: Implement Legal Regulations to Protect Copyright Holders; The Japan News, April 13, 2026

 Editorial, The Japan News; AI-Generated Animation: Implement Legal Regulations to Protect Copyright Holders

"Regardless of the motive, it is unacceptable for third parties to edit copyrighted works — into which creators have invested their time and effort — and post them online without authorization. If this situation is left unresolved, it will undermine creators’ motivation and Japan’s content industry could be harmed.

The government should face up to the negative aspects of AI technology and seriously tackle this issue to protect copyright holders and their works."

Monday, April 13, 2026

How Merck uses patents to help maintain Keytruda’s exorbitant price; International Consortium of Investigative Journalists, April 13, 2026

International Consortium of Investigative Journalists; How Merck uses patents to help maintain Keytruda’s exorbitant price

"Merck’s original patents for Keytruda are set to expire in 2028. But Merck, using a strategy known as “evergreening,” has filed hundreds of additional patents that could protect Keytruda’s dominance well beyond that year.

ICIJ analyzed 180 U.S. patent applications related to Keytruda, provided by the Initiative for Medicines, Access & Knowledge (I-MAK), a nonprofit that examines inequities in the patent system. These were linked to 1,032 additional patent filings around the world tied to the drug. From this universe, ICIJ identified active U.S. patents that illustrate Merck’s strategy of maintaining market exclusivity.

Patents can have different statuses, including active, pending, abandoned, expired, or others. Explore Merck’s active U.S. Keytruda patents below."

Nobody is governing AI; Quartz, April 8, 2026


Jackie Snow, Quartz; Nobody is governing AI

Artificial intelligence is advancing faster than lawmakers can regulate it, while global AI governance fragments in real time

"Artificial intelligence is now making hiring decisions, tutoring children, optimizing power grids, and targeting weapons systems. The rules governing any of that are, almost everywhere, either nonexistent, stalled in committee, or under active attack.

In the United States, the federal government has spent three years producing executive orders, frameworks, and guidelines, none of which have become law. States that tried to fill the gap have been threatened with funding cuts and lawsuits. In Europe, the most ambitious AI legislation in the world is being delayed or softened before most of it has even taken effect. The technology, meanwhile, has not paused for any of this."

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

 Queenie Wong , Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, stating that he hopes it might convince people, whatever their opinions of him, not to repeat the attack.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears; The Guardian, April 8, 2026

The Guardian; It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears

"I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.

Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.

The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman."

Sam Altman May Control Our Future—Can He Be Trusted?; The New Yorker, April 6, 2026

The New Yorker; Sam Altman May Control Our Future—Can He Be Trusted?

"Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.”"

The three realities of AI; Axios, April 13, 2026

Ina Fried, Axios; The three realities of AI

"Three distinct camps are forming around AI: power users, doubters and resisters.

Why it matters: AI isn't just advancing — it's fragmenting how people see the world.

The big picture: The disconnect is showing up everywhere — from job-loss fears to data center protests to actual violence.

Doubters still see AI as glitchy chatbots and viral fails. They aren't using its full capabilities.

Power users run AI agents around the clock, trading tips on how to automate work and decision-making.

Resisters understand AI, think they know where it's headed and want no part of it."

Sunday, April 12, 2026

Is AI the greatest art heist in history?; The Guardian, April 12, 2026

The Guardian; Is AI the greatest art heist in history?

New technologies of reproduction are plundering the art world – and getting away with it

"In 2026, it’s easy to see why generative AI is bad. The internet has nicknamed its excretions “slop”. The CEOs of AI companies prance about on stage like supervillains, bragging that their products will eliminate vast swathes of work. Generative AI requires sacrificing the world’s water to feed its hideous data centres. Around the globe, chatbots induce schizophrenic delusions and urge teens to kill themselves – all while turning users’ brains to mush.

Who could have predicted this? Artists, that’s who...

When tech boosters want to demonise resistance, they invoke the luddites. By their telling, the luddites were primitive idiots, who smashed machines they were too stupid to understand. History, though, tells a different story. As recounted in Brian Merchant’s sublime work Blood in the Machine, luddites were skilled artisans, fighting for their way of life against the “satanic mills” – textile sweatshops powered by child semi-slaves. Forbidden from unionising, luddites smashed machines as a protest tactic. And they did not lose to the inevitable march of progress. They lost to physical force. The government called in troops, and the luddites were either executed or shipped to penal colonies in Australia.

Artists too are fighting for a way of life. And if we are too disorganised to triumph, that will be everyone’s loss. AI companies’ inappropriate scraping may have started with the work of illustrators like me, but it has grown to encompass everything else. It extends to the billions of dollars that these companies squander each year, to the carbon they burn, to the rare minerals in their chips, to the land on which their data centres sit, to culture, education, sanity and our very imaginations. In return for the entirety of the human and non-human world, the tech lords can only offer us dystopia. Their fantasy future contains neither meaningful work nor real communities, just robots chattering to each other, leaving nothing for us."

Texas Considers Required Reading List for Schools, Which Includes the Bible; The New York Times, April 7, 2026

The New York Times; Texas Considers Required Reading List for Schools, Which Includes the Bible

"Texas education officials are considering sweeping changes to English and social studies instruction that would put readings from the Bible on a new state-required reading list for millions of public school students...

A draft of the list, proposed by the Texas Education Agency, outlines more than 200 texts, with widely recognized classics such as “The Very Hungry Caterpillar” by Eric Carle for kindergartners, “A Wrinkle in Time” by Madeleine L’Engle for seventh graders and the Rev. Dr. Martin Luther King Jr.’s “I Have a Dream” speech for eighth graders.

But it also includes passages from the Bible in middle and high school, raising questions about the separation of church and state.

A second proposed list, from Will Hickman, a Republican member of the state board, would require fewer books overall and include biblical passages starting in elementary school.

Supporters say the Bible excerpts, which include the story of David and Goliath from the Old Testament and a meditation on love from First Corinthians, have important literary value...

Democratic members of the state board have criticized the list for a lack of diversity in a state where Hispanic and Black students represent a majority of public school children. The state agency’s version includes texts from Black historical figures such as Langston Hughes and Frederick Douglass, for example, but has relatively few Black and Hispanic authors overall."

As AI pushes students to reconsider majors, universities struggle to adapt; The Hill, April 12, 2026

Lexi Lonas Cochran, The Hill; As AI pushes students to reconsider majors, universities struggle to adapt

"A recent poll shows AI’s increasing role in how students decide on college majors, creating a rapidly developing situation for universities that are still struggling to determine how the technology will shape higher education. 

The Lumina Foundation-Gallup 2026 State of Higher Education survey found 47 percent of currently enrolled college students have thought about switching majors “a great deal” or a “fair amount” over AI concerns." 

The most 'ethical' AI company might also be the web's biggest freeloader; Business Insider, April 12, 2026

Business Insider; The most 'ethical' AI company might also be the web's biggest freeloader

"Cloudflare's latest data offers one of the clearest snapshots yet of how AI companies consume the web, and how little they give back.

The company, which powers roughly 20% of the internet, tracks how AI bots crawl websites versus how often those platforms send users back through referrals. The resulting "crawl-to-refer" ratio is a simple yet telling metric: how much value is extracted compared to returned.

The early April 2026 figures are stark. Anthropic is the worst by a wide margin, with a ratio of 8,800 to 1. That means its bots crawl webpages 8,800 times for every referral sent...

Anthropic's position is particularly striking given its reputation for being "ethical." That reputation has made it a preferred choice among some users who want to support more responsible AI development. This data highlights a different dimension of ethics — how companies interact with the broader web ecosystem that provides information for AI model outputs."
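Cloudflare's "crawl-to-refer" ratio is plain arithmetic: pages crawled divided by referrals sent back. A minimal Python sketch of the metric follows; the 8,800-to-1 figure is from the article, but the function name and counts are illustrative, not Cloudflare's actual methodology or API:

```python
def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """Pages crawled per user referral sent back to a site.

    A higher ratio means more value extracted from the web than returned.
    """
    if referrals == 0:
        return float("inf")  # crawls the web but never sends visitors back
    return crawls / referrals

# Illustrative counts matching the reported early-April 2026 figure
ratio = crawl_to_refer_ratio(8_800, 1)
print(f"{ratio:,.0f} to 1")  # prints "8,800 to 1"
```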

Saturday, April 11, 2026

Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.; The Washington Post, April 11, 2026

The Washington Post; Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.

The artificial intelligence company asked religious leaders for guidance on building a moral chatbot.


"The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.


Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”"

Did AI kill my job, or open up a next chapter?; Public Source, April 10, 2026

[Kip Currier: I posted the following note and excerpt from this Public Source essay for the graduate students in my The Information Professional in Communities course this term:

I'm sharing this Pittsburgh local journalism first person essay by writer Austin Harvey, which I serendipitously came across and have posted to all of my blogs. Given the work that I currently do as a university faculty instructor, the piece raises thorny questions and considerations for me about what information centers/professionals can do to assist and/or "be there" for individuals and communities who are being displaced by AI.

Also, in what ways do academic programs like this one need to better prepare MLIS students to navigate AI-related positive and negative societal changes?

In what ways will information centers/professionals, as well as information center users, potentially be displaced by AI?

In what ways can information centers/professionals proactively adapt and/or manage this disruptive technological change?

What kinds of advocacy and actions by information professionals are required and needed?

Who are potential partners with whom information professionals can confer and collaborate on behalf of communities to strategically address present and future AI-fueled impacts?]


First-person essay by Austin Harvey, Public Source; Did AI kill my job, or open up a next chapter?

"Many writers feared that they would be the first ones to lose their jobs to AI. I did not share this fear, though I feel my heart rate spike every time I use an em-dash now — and you can pry them from my cold, dead hands when I’m gone. I saw value in human writing. I still do, and believe most people agree. We’ve gotten better at identifying AI-generated text, and while there are certainly a litany of websites out there publishing AI-generated articles, readers generally seem averse to them now. 

I was foolish to think none of this would affect me. 

I wasn’t replaced by AI. In fact, ATI’s editors made it very clear that they would never publish AI-generated articles. But AI was still a disruptive force. Search traffic fell. Google changed the rules on SEO and AdSense. We had editors quit or move on to other jobs, but we never hired anyone else to fill their positions. Our team of 12 became a team of seven, and for the better part of two years we were struggling to put out enough content to satisfy the algorithms. I was burning out constantly, still holding on to the idea that this was surely better than self-employment. 

Then, I was called into a meeting and told I was being let go at the end of January...

It wasn’t that I was replaced by AI, or that AI-generated articles were taking all of the search traffic; it was that a great number of people have stopped reading entirely, opting instead to simply ask ChatGPT or Gemini for answers to their questions. It’s an extension of the same issue that has caused many local news outlets to cease operations or cut staff."

How AI is getting better at finding security holes; NPR, April 11, 2026

NPR; How AI is getting better at finding security holes

"In the past few months, AI models have gone from producing hallucinations to becoming effective at finding security flaws in software, according to developers who maintain widely used cyber infrastructure. Those pieces of software, among other things, power operating systems and transfer data for things connected to the internet.

While these new capabilities can help developers make software more secure, they can also be weaponized by hackers and nation states to steal information and money or disrupt critical services.

The latest development of AI's cyber capability came on Tuesday, when AI lab Anthropic announced it had developed a powerful new model the company believes could "reshape cybersecurity." It said that its latest model, Mythos Preview, was able to find "high-severity vulnerabilities, including some in every major operating system and web browser." Not only that, the model was better at coming up with ways to exploit the vulnerabilities it found, which means malicious actors can more effectively achieve their goals.

For now, the company is limiting the access to the model to around 50 select companies and organizations "in an effort to secure the world's most critical software." They're calling the collaboration Project Glasswing, naming it after a butterfly species with transparent wings.

Anthropic says the risk for misuse is so high that it has no plans to release this particular model to the general public, according to the announcement, but it will release other related models. "Our eventual goal is to enable our users to safely deploy Mythos-class models at scale," the company wrote."

Addison Rae Files Copyright Claim Against DHS for Using 'Diet Pepsi' in ICE-Related Video: 'Taylor Swift Could Never!'; International Business Times UK, April 10, 2026

International Business Times UK; Addison Rae Files Copyright Claim Against DHS for Using 'Diet Pepsi' in ICE-Related Video: 'Taylor Swift Could Never!'

"Addison Rae has asserted her control over her creative catalogue by successfully removing her music from a government-led promotional campaign. The pop singer and social media personality recently filed a copyright claim against the Department of Homeland Security after her hit single, 'Diet Pepsi', was featured in a video produced by Immigration and Customs Enforcement (ICE).

The legal intervention resulted in the media being disabled across multiple social platforms, effectively silencing the government's use of her intellectual property. This move places Rae among a growing list of pop stars who have refused to allow their work to be associated with federal enforcement activities.

Addison Rae Takes Legal Action as DHS Disables 'Diet Pepsi' Video

The controversy began when users online noticed Rae's latest track accompanying footage from the government agency. Rae moved quickly to file the claim, ensuring the content was stripped of its audio or removed entirely."

Why A Potential Supreme Court Copyright Decision Could Change The Music Business; Forbes, April 9, 2026

Bill Hochberg, Forbes; Why A Potential Supreme Court Copyright Decision Could Change The Music Business 

"There’s billions at stake for music companies and investors who may get a big hit – and not the kind they like – if the U.S. Supreme Court decides music creators and their heirs can get back their copyrights worldwide."

Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

Maxwell Zeff, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Amazon upsets ebook lovers by ending support for old Kindle devices; The Guardian, April 9, 2026

The Guardian; Amazon upsets ebook lovers by ending support for old Kindle devices


Up to 2m e-readers made before 2013 will no longer be able to download new titles

"Amazon is to stop supporting older Kindle models, leaving longtime ebook fans unable to access new content from the Kindle store.

Devices released during or before 2012 will no longer receive updates from 20 May, affecting owners of older Kindles, including the earliest models such as the Touch and some Fire tablets. It is thought that 2m e-readers could be affected.

Users will still be able to read ebooks they have downloaded, and their accounts and their Kindle library will remain accessible on mobile and desktop apps. Active users have been offered discounts to help “transition to newer devices”. Amazon said performing a factory reset on affected Kindles would make them unusable.

Disappointed users have vented their frustration online, including in comments on The Verge, accusing Amazon of “causing waste at a large scale” and saying their devices would be reduced to a paperweight despite still working."