Tuesday, December 23, 2025

CBS Boss’s ‘60 Minutes’ Intervention Backfires as Episode Leaks; The Daily Beast, December 23, 2025

William Vaillancourt, The Daily Beast; CBS Boss’s ‘60 Minutes’ Intervention Backfires as Episode Leaks

"The 60 Minutes segment that CBS News Editor-in-Chief Bari Weiss pulled from airing on Sunday has leaked.

The segment covers the notorious CECOT prison in El Salvador, where the Trump administration sent some Venezuelan migrants. Canada’s Global TV aired the segment, according to CNN media reporter Brian Stelter, and it was later published elsewhere."

Monday, December 22, 2025

Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’; Fortune, December 20, 2025

  , Fortune; Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’

"Asteria partnered with Moonvalley AI, which makes AI tools for filmmakers, to create Marey, named after cinematographer Étienne-Jules Marey. The tool helps generate AI video that can be used for movies and TV, but only draws on open-license content or material it has explicit permission to use. 

Being careful about the inputs for Asteria’s AI video generation is important, Lyonne said at the Fortune Brainstorm AI conference in San Francisco last week. As AI use increases, both tech and Hollywood need to respect the work of the cast, as well as the crew and the writers behind the scenes. 

“I don’t think it’s super kosher copacetic to just kind of rob freely under the auspices of acceleration or China,” she said. 

While she hasn’t yet used AI to help make a TV show or movie, Lyonne said Asteria has used it in other small ways to develop renderings and other details.

“It’s a pretty revolutionary act that we actually do have that model and that’s you know the basis for everything that we work on,” said Lyonne.

Marey is available to the public for a credits-based subscription starting at $14.99 per month."

OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers; Bloomberg Law, December 22, 2025

 Annelise Levy, Bloomberg Law; OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers

"Writers including Pulitzer Prize-winning journalist John Carreyrou filed a copyright lawsuit accusing six AI giants of using pirated copies of their books to train large language models.

The complaint, filed Monday in the US District Court for the Northern District of California, claims Anthropic PBC, Google LLC, OpenAI Inc., Meta Platforms Inc., xAI Corp., and Perplexity AI Inc. committed a “deliberate act of theft.”

It is the first copyright lawsuit against xAI over its training process, and the first suit brought by authors against Perplexity...

Carreyrou is among the authors who opted out of a $1.5 billion class-action settlement with Anthropic."

Sunday, December 21, 2025

Launch, Train, Settle: How Suno And Udio’s Licensing Deals Made Copyright Infringement Profitable; Forbes, December 18, 2025

Virginie Berger, Forbes; Launch, Train, Settle: How Suno And Udio’s Licensing Deals Made Copyright Infringement Profitable

"The Precedent That Pays

Perhaps most concerning is what these partial settlements teach other AI companies: copyright infringement can be a viable business strategy, as long as you only have to answer to those with the resources to sue.

The calculus is straightforward. Build your product using copyrighted material without permission. Grow quickly while competitors who might try to license properly struggle with costs and complexity.

If you get big enough, those with sufficient resources will eventually sue. At that point, negotiate from strength because your technology is already deployed, your users are already dependent on it, and dismantling what you've built would be costly.

The worst case is no longer court-ordered damages or a shutdown; it is a licensing deal where you finally pay something, but far less than you would have paid to license properly from the start, and only to the major players who could force you to the table. And you keep operating with legitimacy.

Both Suno and Udio can now market themselves as "responsibly licensed" platforms, pointing to their deals with major labels as proof of legitimacy. The narrative shifts from "they stole content to build this" to "they're innovative partners in the future of music.""

Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics; Notre Dame News, December 19, 2025

Carrie Gates and Laura Moran Walton, Notre Dame News; Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics

"The University of Notre Dame has been awarded a $50.8 million grant from Lilly Endowment Inc. to support the DELTA Network: Faith-Based Ethical Formation for a World of Powerful AI. Led by the Notre Dame Institute for Ethics and the Common Good (ECG), this grant — the largest awarded to Notre Dame by a private foundation in the University’s history — will fund the further development of a shared, faith-based ethical framework that scholars, religious leaders, tech leaders, teachers, journalists, young people and the broader public can draw upon to discern appropriate uses of artificial intelligence, or AI.

The grant will also support the establishment of a robust, interconnected network that will provide practical resources to help navigate challenges posed by rapidly developing AI. Based on principles and values from Christian traditions, the framework is designed to be accessible to people of all faith perspectives.

“We are deeply grateful to Lilly Endowment for its generous support of this critically important initiative,” said University President Rev. Robert A. Dowd, C.S.C. “Pope Leo XIV calls for us all to work to ensure that AI is ‘intelligent, relational and guided by love,’ reflecting the design of God the Creator. As a Catholic university that seeks to promote human flourishing, Notre Dame is well-positioned to build bridges between religious leaders and educators, and those creating and using new technologies, so that they might together explore the moral and ethical questions associated with AI.”

Australian culture, resources and democracy for $4,300 a year? Thanks for the offer, tech bros, but no thanks; The Guardian, December 15, 2025

 , The Guardian; Australian culture, resources and democracy for $4,300 a year? Thanks for the offer, tech bros, but no thanks

"According to the Tech Council, AI will deliver $115bn in annual productivity (or about $4,300 per person), rubbery figures generated by industry-commissioned research based on estimates on hours saved with no regard for jobs lost, the distribution of the promised dividend benefit or how the profits will flow.

In return for this ill-defined bounty, Farquhar says our government will need to allow the tech industry to do three things: build a data and text mining exemption to copyright law, rapidly scale data centre infrastructure and allow foreign companies to use these centres without regard for local laws. This is a proposition that demands closer scrutiny.

The use of copyrighted content to train AI has been a burning issue since 2023 when a massive data dredge saw more than 190,000 authors (including me) have our works plundered without our consent to train AI. Musicians and artists too have had their work scraped and repurposed.

This theft has been critical in training the large language models to portray something approaching empathy. It has also allowed paid users to take this stolen content and ape creators, devaluing and diminishing their work in the process. Nick Cave has described this as “replication as travesty”, noting “songs arise out of suffering … data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing.”

The sense of grievance among creators over the erasure of culture is wide and deep. A wave of creators from Peter Garrett to Tina Arena, Anna Funder and Trent Dalton have determined this is the moment to take a stand.

It is not just the performers; journalists, academics, voiceover and visual artists are all being replaced by shittier but cheaper automated products that are built on the theft of their labour, undermine the integrity of their work and will ultimately take their jobs.

Like fossil fuels, what is being extracted and consumed is the sum of our accumulated history. It goes from metaphor to literal when it comes to the second plank of Farquhar’s pitch: massive spending on industrial infrastructure to accommodate AI.

This imperative to power AI is the justification used by Donald Trump to recharge the mining of fossil fuels, while the industry is beating the “modular nuclear” drum for a cleaner AI revolution. Meanwhile, the OpenAI CEO, Sam Altman, is reassuring us that we don’t need to stress because AI will solve climate change anyway!

The third and final element of Farquhar’s pitch is probably its most revealing. If Australia wants to build this AI nirvana, foreign nations should be given diplomatic immunity for the data centres built and operated here. This quaint notion of the “data embassy” overriding national sovereignty reinforces a growing sense that the tech sector is moving beyond the idea of the nation state governing corporations to that of a modern imperial power.

That’s the premise of Karen Hao’s book The Empire of AI, which chronicles the rise of OpenAI and the choices it made to trade off safety and the public good in pursuit of scale and profit."

Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash; The Guardian, December 19, 2025

 , The Guardian; Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash

"The Productivity Commission has abandoned a proposal to allow tech companies to mine copyrighted material to train artificial intelligence models, after a fierce backlash from the creative industries.

Instead, the government’s top economic advisory body recommended the government wait three years before deciding whether to establish an independent review of Australian copyright settings and the impact of the disruptive new technology...

In its interim report on the digital economy, the commission floated the idea of granting a “fair dealing” exemption to copyright rules that would allow AI companies to mine data and text to develop their large language models...

The furious response from creative industries to the commission’s idea included music industry bodies saying it would “legitimise digital piracy under guise of productivity”."

Friday, December 19, 2025

Fair Use is a Right. Ignoring It Has Consequences.; Electronic Frontier Foundation (EFF), December 18, 2025

Mitch Stoltz, Electronic Frontier Foundation (EFF); Fair Use is a Right. Ignoring It Has Consequences.

"Fair use is not just an excuse to copy—it’s a pillar of online speech protection, and disregarding it in order to lash out at a critic should have serious consequences. That’s what we told a federal court in Channel 781 News v. Waltham Community Access Corporation, our case fighting copyright abuse on behalf of citizen journalists."

Thursday, December 18, 2025

January 1, 2026 is Public Domain Day: Works from 1930 are open to all, as are sound recordings from 1925!; Center for the Study of the Public Domain, December 2025

Jennifer Jenkins and James Boyle, Center for the Study of the Public Domain; January 1, 2026 is Public Domain Day: Works from 1930 are open to all, as are sound recordings from 1925!

"CC BY 4.0

Please note that this site is only about US law; the copyright terms in other countries are different.[2]

On January 1, 2026, thousands of copyrighted works from 1930 enter the US public domain, along with sound recordings from 1925. They will be free for all to copy, share, and build upon.[3] The literary highlights range from William Faulkner’s As I Lay Dying to Agatha Christie’s The Murder at the Vicarage and the first four Nancy Drew novels. From cartoons and comic strips, the characters Betty Boop, Pluto (originally named Rover), and Blondie and Dagwood made their first appearances. Films from the year featured Marlene Dietrich, Greta Garbo, the Marx Brothers, and John Wayne in his first leading role. Among the public domain compositions are I Got Rhythm, Georgia on My Mind, and Dream a Little Dream of Me. We are also celebrating paintings from Piet Mondrian and Paul Klee. Below you can find lists of some of the most notable books, characters, comics and cartoons, films, songs, sound recordings, and art entering the public domain.[4] After each of them, we have provided an analysis of their significance. At the end of the article, we explain:

Why all of this matters
How do copyright and trademark law apply to characters?
What is the impact of the long copyright term?
What are the basic rules for determining whether something is public domain?
Conclusion"

Tuesday, December 16, 2025

Copyright Takedowns Issued After What Looks to Be a Leaked Avengers: Doomsday Trailer Posted Online; IGN, December 16, 2025

, IGN; Copyright Takedowns Issued After What Looks to Be a Leaked Avengers: Doomsday Trailer Posted Online

 "Copyright takedowns are targeting what appears to be a leaked version of Marvel's first Avengers: Doomsday trailer, which is now being snapped out of existence across social media.

Uploads of the video file began yesterday, and were initially easy to find across social media. But today, copyright notices have replaced numerous instances of the file on X and Reddit, and the original video is nowhere to be seen on YouTube."

The Architects of AI Are TIME’s 2025 Person of the Year; Time, December 11, 2025

"For decades, humankind steeled itself for the rise of thinking machines. As we marveled at their ability to beat chess champions and predict protein structures, we also recoiled from their inherent uncanniness, not to mention the threats to our sense of humanity. Leaders striving to develop the technology, including Sam Altman and Elon Musk, warned that the pursuit of its powers could create unforeseen catastrophe.

This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible. “Every industry needs it, every company uses it, and every nation needs to build it,” Huang tells TIME in a 75-minute interview in November, two days after announcing that Nvidia, the world’s first $5 trillion company, had once again smashed Wall Street’s earnings expectations. “This is the single most impactful technology of our time.” OpenAI’s ChatGPT, which at launch was the fastest-growing consumer app of all time, has surpassed 800 million weekly users. AI wrote millions of lines of code, aided lab scientists, generated viral songs, and spurred companies to re-examine their strategies or risk obsolescence. (OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives.)...

This is the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods. Racing both beside and against each other, they placed multibillion-dollar bets on one of the biggest physical infrastructure projects of all time. They reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons."

Google AI summaries are ruining the livelihoods of recipe writers: ‘It’s an extinction event’; The Guardian, December 15, 2025

 , The Guardian; Google AI summaries are ruining the livelihoods of recipe writers: ‘It’s an extinction event’

"This past March, when Google began rolling out its AI Mode search capability, it began offering AI-generated recipes. The recipes were not all that intelligent. The AI had taken elements of similar recipes from multiple creators and Frankensteined them into something barely recognizable. In one memorable case, the Google AI failed to distinguish the satirical website the Onion from legitimate recipe sites and advised users to cook with non-toxic glue.

Over the past few years, bloggers who have not secured their sites behind a paywall have seen their carefully developed and tested recipes show up, often without attribution and in a bastardized form, in ChatGPT replies. They have seen dumbed-down versions of their recipes in AI-assembled cookbooks available for digital downloads on Etsy or on AI-built websites that bear a superficial resemblance to an old-school human-written blog. Their photos and videos, meanwhile, are repurposed in Facebook posts and Pinterest pins that link back to this digital slop.

Recipe writers have no legal recourse because recipes generally are not copyrightable. Although copyright protects published or recorded work, it does not cover sets of instructions (although it can apply to the particular wording of those instructions).

Without this essential IP, many food bloggers earn their living by offering their work for free while using ads to make money. But now they fear that casual users who rely on search engines or social media to find a recipe for dinner will conflate their work with AI slop and stop trusting online recipe sites altogether."

Monday, December 15, 2025

Kinds of Intelligence | LJ Directors’ Summit 2025; Library Journal, December 2, 2025

 Lisa Peet, Library Journal; Kinds of Intelligence | LJ Directors’ Summit 2025

"LJ’s 2025 Directors’ Summit looked at artificial—and very real—intelligence from multiple angles

If there was any doubt about what issues are on the minds of today’s library leaders, Library Journal’s 2025 Directors’ Summit, held October 16 and 17 at Denver Public Library (DPL), had some ready answers: AI and people.

Nick Tanzi hit both notes handily in his keynote, “Getting Your Public Library AI-Ready.” Tanzi, assistant director of South Huntington Public Library (SHPL), NY, and technology consultant at The-Digital-Librarian.com (and a 2025 LJ Mover & Shaker), began with a reminder of other at-the-time “disruptive” technologies, starting with a 1994 clip of Today Show anchors first encountering “@” and “.com.”

During most of this digital change, he noted, libraries had the technologies before many patrons and could lead the way. Now everyone has access to some form of AI, but it’s poorly understood. And access without understanding is a staff problem as well as a patron problem.

So, what does it mean for a library to be AI-ready? Start with policy and training, said Tanzi, and then translate that to public services, rather than the other way around. Library policies need to be AI-proofed, beginning by looking at what’s already in place and where it might be stressed by AI: policies governing collection development, reconsideration of materials, tool use, access control, the library’s editorial process, and confidential data. Staff are already using some form of AI at work—do they have organizational guidance?

Tanzi advised fostering AI literacy across the library. At SHPL, he formed an AI user group; it has no prerequisite for participation and staff are paid for their time. Members explore new tools, discuss best practices, complete “homework,” and share feedback, which also allows Tanzi to stress-test policies. It’s not a replacement for formal training, but helps him discover which tools work best in various departments and speeds up learning.

We need to demystify AI tools for staff and patrons, Tanzi noted, and teach ethics around them. Your ultimate goal is to create informed citizens; libraries can build community around AI education, partnering with the local school district, colleges, and government."

Government's AI consultation finds just 3% support copyright exception; The Bookseller, December 15, 2025

Maia Snow, The Bookseller; Government's AI consultation finds just 3% support copyright exception

"The initial results of the consultation found that the majority of respondents (88%) backed licences being required in all cases where data was being used for AI training. Just 3% of respondents supported the government’s preferred options, which would allow data mining by AI companies and require rights holders to opt-out."

Chasing the Mirage of “Ethical” AI; The MIT Press Reader, December 2025

De Kai, The MIT Press Reader; Chasing the Mirage of “Ethical” AI

"Artificial intelligence poses many threats to the world, but the most critical existential danger lies in the convergence of two AI-powered phenomena: hyperpolarization accompanied by hyperweaponization. Alarmingly, AI is accelerating hyperpolarization while simultaneously enabling hyperweaponization by democratizing weapons of mass destruction (WMDs).

For the first time in human history, lethal drones can be constructed with over-the-counter parts. This means anyone can make killer squadrons of AI-based weapons that fit in the palm of a hand. Worse yet, the AI in computational biology has made genetically engineered bioweapons a living room technology.

How do we handle such a polarized era when anyone, in their antagonism or despair, can run down to the homebuilder’s store and buy all they need to assemble a remote-operated or fully autonomous WMD?

It’s not the AI overlords destroying humanity that we need to worry about so much as a hyperpolarized, hyperweaponized humanity destroying humanity.

To survive this latest evolutionary challenge, we must address the problem of nurturing our artificial influencers. Nurturing them to be ethical and responsible enough not to be mindlessly driving societal polarization straight into Armageddon. Nurturing them so they can nurture us.

But is it possible to ensure such ethical AIs? How can we accomplish this?"

Sunday, December 14, 2025

I called my recipe book Sabzi – vegetables. But the name was trademarked. And my legal ordeal began; The Guardian, December 4, 2025

, The Guardian ; I called my recipe book Sabzi – vegetables. But the name was trademarked. And my legal ordeal began

"Vegetables, in my experience, rarely cause controversy. Yet last month I found myself in the middle of a legal storm over who gets to own the word sabzi – the Hindi, Urdu, Punjabi, Persian, Dari and Pashto word for cooked veg or fresh greens. It was a story as absurd as it was stressful: a chain of delis threatened me with legal action over the title of a book I had spent years creating. But what began as a personal legal headache soon morphed into something bigger, a story about how power and privilege still dominate conversations about cultural ownership in the UK.

When the email first landed in my inbox, I assumed it must be a wind-up. My editor at Bloomsbury had forwarded a solicitor’s letter addressed to me personally, care of my publishers. As I read it, my stomach dropped. A deli owner from Cornwall accused me of infringing her intellectual property over my cookbook Sabzi: Fresh Vegetarian Recipes for Every Day. Why? Because in 2022, she had trademarked the word sabzi to use for her business and any future products, including a cookbook she hoped to write one day.

My jaw clenched as I pored over pages of legal documentation, written in the punitive and aggressive tone of a firm gearing up for a fight. I was accused of “misrepresentation” (copying the deli’s brand), damaging its business and affecting its future growth, and they demanded detailed commercial reports about my work, including sales revenue, stock numbers and distribution contracts – information so intrusive that it felt like an audit. Buried in the legal jargon was a line that shook me. They reserved the right to seek the “destruction” of all items relating to their infringement claim. Reading the threat of my book being pulped was nothing short of devastating. It was also utterly enraging.

Because sabzi isn’t some cute exotic brand name, it’s part of the daily lexicon of more than a billion people across cultures and borders. In south Asia, it simply means cooked vegetables."

Publisher under fire after ‘fake’ citations found in AI ethics guide; The Times, December 14, 2025

Rhys Blakely, The Times; Publisher under fire after ‘fake’ citations found in AI ethics guide

"One of the world’s largest academic publishers is selling a book on the ethics of AI research that appears to be riddled with fake citations, including references to journals that do not exist.

Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.

The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material."