Tuesday, December 16, 2025

The Architects of AI Are TIME’s 2025 Person of the Year; Time, December 11, 2025

"For decades, humankind steeled itself for the rise of thinking machines. As we marveled at their ability to beat chess champions and predict protein structures, we also recoiled from their inherent uncanniness, not to mention the threats to our sense of humanity. Leaders striving to develop the technology, including Sam Altman and Elon Musk, warned that the pursuit of its powers could create unforeseen catastrophe.

This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible. “Every industry needs it, every company uses it, and every nation needs to build it,” Huang tells TIME in a 75-minute interview in November, two days after announcing that Nvidia, the world’s first $5 trillion company, had once again smashed Wall Street’s earnings expectations. “This is the single most impactful technology of our time.” OpenAI’s ChatGPT, which at launch was the fastest-growing consumer app of all time, has surpassed 800 million weekly users. AI wrote millions of lines of code, aided lab scientists, generated viral songs, and spurred companies to re-examine their strategies or risk obsolescence. (OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives.)...

This is the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods. Racing both beside and against each other, they placed multibillion-dollar bets on one of the biggest physical infrastructure projects of all time. They reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons."

Google AI summaries are ruining the livelihoods of recipe writers: ‘It’s an extinction event’; The Guardian, December 15, 2025

The Guardian; Google AI summaries are ruining the livelihoods of recipe writers: ‘It’s an extinction event’

"This past March, when Google began rolling out its AI Mode search capability, it began offering AI-generated recipes. The recipes were not all that intelligent. The AI had taken elements of similar recipes from multiple creators and Frankensteined them into something barely recognizable. In one memorable case, the Google AI failed to distinguish the satirical website the Onion from legitimate recipe sites and advised users to cook with non-toxic glue.

Over the past few years, bloggers who have not secured their sites behind a paywall have seen their carefully developed and tested recipes show up, often without attribution and in a bastardized form, in ChatGPT replies. They have seen dumbed-down versions of their recipes in AI-assembled cookbooks available for digital downloads on Etsy or on AI-built websites that bear a superficial resemblance to an old-school human-written blog. Their photos and videos, meanwhile, are repurposed in Facebook posts and Pinterest pins that link back to this digital slop.

Recipe writers have no legal recourse because recipes generally are not copyrightable. Although copyright protects published or recorded work, it does not cover sets of instructions (although it can apply to the particular wording of those instructions).

Without this essential IP, many food bloggers earn their living by offering their work for free while using ads to make money. But now they fear that casual users who rely on search engines or social media to find a recipe for dinner will conflate their work with AI slop and stop trusting online recipe sites altogether."

Monday, December 15, 2025

Kinds of Intelligence | LJ Directors’ Summit 2025; Library Journal, December 2, 2025

 Lisa Peet, Library Journal; Kinds of Intelligence | LJ Directors’ Summit 2025

"LJ’s 2025 Directors’ Summit looked at artificial—and very real—intelligence from multiple angles

If there was any doubt about what issues are on the minds of today’s library leaders, Library Journal’s 2025 Directors’ Summit, held October 16 and 17 at Denver Public Library (DPL), had some ready answers: AI and people.

Nick Tanzi hit both notes handily in his keynote, “Getting Your Public Library AI-Ready.” Tanzi, assistant director of South Huntington Public Library (SHPL), NY, and technology consultant at The-Digital-Librarian.com (and a 2025 LJ Mover & Shaker), began with a reminder of other at-the-time “disruptive” technologies, starting with a 1994 clip of Today Show anchors first encountering “@” and “.com.”

During most of this digital change, he noted, libraries had the technologies before many patrons and could lead the way. Now everyone has access to some form of AI, but it’s poorly understood. And access without understanding is a staff problem as well as a patron problem.

So, what does it mean for a library to be AI-ready? Start with policy and training, said Tanzi, and then translate that to public services, rather than the other way around. Library policies need to be AI-proofed, beginning by looking at what’s already in place and where it might be stressed by AI: policies governing collection development, reconsideration of materials, tool use, access control, the library’s editorial process, and confidential data. Staff are already using some form of AI at work—do they have organizational guidance?

Tanzi advised fostering AI literacy across the library. At SHPL, he formed an AI user group; it has no prerequisite for participation and staff are paid for their time. Members explore new tools, discuss best practices, complete “homework,” and share feedback, which also allows Tanzi to stress-test policies. It’s not a replacement for formal training, but helps him discover which tools work best in various departments and speeds up learning.

We need to demystify AI tools for staff and patrons, Tanzi noted, and teach ethics around them. Your ultimate goal is to create informed citizens; libraries can build community around AI education, partnering with the local school district, colleges, and government."

Government's AI consultation finds just 3% support copyright exception; The Bookseller, December 15, 2025

MAIA SNOW, The Bookseller ; Government's AI consultation finds just 3% support copyright exception

"The initial results of the consultation found that the majority of respondents (88%) backed licences being required in all cases where data was being used for AI training. Just 3% of respondents supported the government’s preferred options, which would allow data mining by AI companies and require rights holders to opt-out."

Chasing the Mirage of “Ethical” AI; The MIT Press Reader, December 2025

De Kai, The MIT Press Reader; Chasing the Mirage of “Ethical” AI

"Artificial intelligence poses many threats to the world, but the most critical existential danger lies in the convergence of two AI-powered phenomena: hyperpolarization accompanied by hyperweaponization. Alarmingly, AI is accelerating hyperpolarization while simultaneously enabling hyperweaponization by democratizing weapons of mass destruction (WMDs).

For the first time in human history, lethal drones can be constructed with over-the-counter parts. This means anyone can make killer squadrons of AI-based weapons that fit in the palm of a hand. Worse yet, the AI in computational biology has made genetically engineered bioweapons a living room technology.

How do we handle such a polarized era when anyone, in their antagonism or despair, can run down to the homebuilder’s store and buy all they need to assemble a remote-operated or fully autonomous WMD?

It’s not the AI overlords destroying humanity that we need to worry about so much as a hyperpolarized, hyperweaponized humanity destroying humanity.

To survive this latest evolutionary challenge, we must address the problem of nurturing our artificial influencers. Nurturing them to be ethical and responsible enough not to be mindlessly driving societal polarization straight into Armageddon. Nurturing them so they can nurture us.

But is it possible to ensure such ethical AIs? How can we accomplish this?"

Sunday, December 14, 2025

I called my recipe book Sabzi – vegetables. But the name was trademarked. And my legal ordeal began; The Guardian, December 4, 2025

The Guardian; I called my recipe book Sabzi – vegetables. But the name was trademarked. And my legal ordeal began

"Vegetables, in my experience, rarely cause controversy. Yet last month I found myself in the middle of a legal storm over who gets to own the word sabzi – the Hindi, Urdu, Punjabi, Persian, Dari and Pashto word for cooked veg or fresh greens. It was a story as absurd as it was stressful, a chain of delis threatened me with legal action over the title of a book I had spent years creating. But what began as a personal legal headache soon morphed into something bigger, a story about how power and privilege still dominate conversations about cultural ownership in the UK.

When the email first landed in my inbox, I assumed it must be a wind-up. My editor at Bloomsbury had forwarded a solicitor’s letter addressed to me personally, care of my publishers. As I read it, my stomach dropped. A deli owner from Cornwall accused me of infringing her intellectual property over my cookbook Sabzi: Fresh Vegetarian Recipes for Every Day. Why? Because in 2022, she had trademarked the word sabzi to use for her business and any future products, including a cookbook she hoped to write one day.

My jaw clenched as I pored over pages of legal documentation, written in the punitive and aggressive tone of a firm gearing up for a fight. I was accused of “misrepresentation” (copying the deli’s brand), damaging its business and affecting its future growth, and they demanded detailed commercial reports about my work, including sales revenue, stock numbers and distribution contracts – information so intrusive that it felt like an audit. Buried in the legal jargon was a line that shook me. They reserved the right to seek the “destruction” of all items relating to their infringement claim. Reading the threat of my book being pulped was nothing short of devastating. It was also utterly enraging.

Because sabzi isn’t some cute exotic brand name, it’s part of the daily lexicon of more than a billion people across cultures and borders. In south Asia, it simply means cooked vegetables."

Publisher under fire after ‘fake’ citations found in AI ethics guide; The Times, December 14, 2025

 Rhys Blakely, The Times ; Publisher under fire after ‘fake’ citations found in AI ethics guide

"One of the world’s largest academic publishers is selling a book on the ethics of AI intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.

Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.

The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material."

(Podcast) The Briefing: What Is Fair Use and Why Does It Matter? (Featured); JDSupra, December 5, 2025

Richard Buckley, Jr. and Scott Hervey, JDSupra ; (Podcast) The Briefing: What Is Fair Use and Why Does It Matter? (Featured)

"Creators, beware: just because it’s online doesn’t mean it’s fair game. In this episode of The Briefing, Scott Hervey and Richard Buckley break down one of the most misunderstood areas of copyright law—fair use.

In this featured episode, they cover:

- What makes a use “transformative”?

- Why credit alone doesn’t protect you

- How recent court rulings (Warhol v. Goldsmith) are changing the game

- Tips to stay on the right side of the law"

The Disney-OpenAI tie-up has huge implications for intellectual property; Fast Company, December 11, 2025

CHRIS STOKEL-WALKER, Fast Company ; The Disney-OpenAI tie-up has huge implications for intellectual property

"Walt Disney and OpenAI make for very odd bedfellows: The former is one of the most-recognized brands among children under the age of 18. The near-$200 billion company’s value has been derived from more than a century of aggressive safeguarding of its intellectual property and keeping the magic alive among innocent children.

OpenAI, which celebrated its first decade of existence this week, is best known for upending creativity, the economy, and society with its flagship product, ChatGPT. And in the last two months, it has said it wants to get to a place where its adult users can use its tech to create erotica.

So what the hell should we make of a just-announced deal between the two that will allow ChatGPT and Sora users to create images and videos of more than 200 characters, from Mickey and Minnie Mouse to the Mandalorian, starting from early 2026?"


Saturday, December 13, 2025

Authors Ask to Update Meta AI Copyright Suit With Torrent Claim; Bloomberg Law, December 12, 2025

Bloomberg Law; Authors Ask to Update Meta AI Copyright Suit With Torrent Claim

"Authors in a putative class action copyright suit against Meta Platforms Inc. asked a federal judge for permission to amend their complaint to add a claim over Meta’s use of peer-to-peer file-sharing unveiled in discovery."

Friday, December 12, 2025

The Disney-OpenAI Deal Redefines the AI Copyright War; Wired, December 11, 2025

BRIAN BARRETT, Wired; The Disney-OpenAI Deal Redefines the AI Copyright War

 "“I think that AI companies and copyright holders are beginning to understand and become reconciled to the fact that neither side is going to score an absolute victory,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. While many of these cases are still working their way through the courts, so far it seems like model inputs—the training data that these models learn from—are covered by fair use. But this deal is about outputs—what the model returns based on your prompt—where IP owners like Disney have a much stronger case

Coming to an output agreement resolves a host of messy, potentially unsolvable issues. Even if a company tells an AI model not to produce, say, Elsa at a Wendy’s drive-through, the model might know enough about Elsa to do so anyway—or a user might be able to prompt their way into making Elsa without asking for the character by name. It’s a tension that legal scholars call the “Snoopy problem,” but in this case you might as well call it the Disney problem.

“Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements,” says Sag."

Disney's deal with OpenAI is about controlling the future of copyright; engadget, December 11, 2025

Igor Bonifacic, engadget; Disney's deal with OpenAI is about controlling the future of copyright

"The agreement brings together two parties with very different public stances on copyright. Before OpenAI released Sora, the company reportedly notified studios and talent agencies they would need to opt out of having their work appear in the new app. The company later backtracked on this stance. Before that, OpenAI admitted, in a regulatory filing, it would be "impossible to train today's leading AI models without using copyrighted materials."

By contrast, Disney takes copyright law very seriously. In fact, you could argue no other company has done more to shape US copyright law than Disney. For example, there's the Sonny Bono Copyright Term Extension Act, which is more derisively known as the Mickey Mouse Protection Act. The law effectively froze the advancement of the public domain in the United States, with Disney being the greatest beneficiary. It was only last year that the company's copyright for Steamboat Willie expired, 95 years after Walt Disney first created the iconic cartoon."

Thursday, December 11, 2025

AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures; International Business Times, December 11, 2025

Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures

"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.

This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.

The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.

To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.

We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.

And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.

At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."

Trump Says Chips Ahoy to Xi Jinping; Wall Street Journal, December 10, 2025

The Editorial Board, Wall Street Journal; Trump Says Chips Ahoy to Xi Jinping

"President Trump said this week he will let Nvidia sell its H200 chip to China in return for the U.S. Treasury getting a 25% cut of the sales. The Indians struck a better deal when they sold Manhattan to the Dutch. Why would the President give away one of America’s chief technological advantages to an adversary and its chief economic competitor?"

Trump Signs Executive Order to Neuter State A.I. Laws; The New York Times, December 11, 2025

The New York Times; Trump Signs Executive Order to Neuter State A.I. Laws

"President Trump signed an executive order on Thursday that aims to neuter state laws that place limits on the artificial intelligence industry, a win for tech companies that have lobbied against regulation of the booming technology.

Mr. Trump, who has said it is important for America to dominate A.I., has criticized the state laws for generating a confusing patchwork of regulations. He said his order would create one federal regulatory framework that would override the state laws, and added that it was critical to keep the United States ahead of China in a battle for leadership on the technology."

Banning AI Regulation Would Be a Disaster; The Atlantic, December 11, 2025

 Chuck Hagel, The Atlantic; Banning AI Regulation Would Be a Disaster

"On Monday, Donald Trump announced on Truth Social that he would soon sign an executive order prohibiting states from regulating AI...

The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency...

Contrary to the narrative promoted by a small number of dominant firms, regulation does not have to slow innovation. Clear rules would foster growth by hardening systems against attack, reducing misuse, and ensuring that the models integrated into defense systems and public-facing platforms are robust and secure before deployment at scale.

Critics of oversight are correct that a patchwork of poorly designed laws can impede that mission. But they miss two essential points. First, competitive AI policy cannot be cordoned off from the broader systems that shape U.S. stability and resilience...

Second, states remain the country’s most effective laboratories for developing and refining policy on complex, fast-moving technologies, especially in the persistent vacuum of federal action...

The solution to AI’s risks is not to dismantle oversight but to design the right oversight. American leadership in artificial intelligence will not be secured by weakening the few guardrails that exist. It will be secured the same way we have protected every crucial technology touching the safety, stability, and credibility of the nation: with serious rules built to withstand real adversaries operating in the real world. The United States should not be lobbied out of protecting its own future."

Disney says Google AI infringes copyright “on a massive scale”; Ars Technica, December 11, 2025

RYAN WHITWAM , Ars Technica; Disney says Google AI infringes copyright “on a massive scale”

"Disney has sent a cease and desist to Google, alleging the company’s AI tools are infringing Disney’s copyrights “on a massive scale.”

According to the letter, Google is violating the entertainment conglomerate’s intellectual property in multiple ways. The legal notice says Google has copied a “large corpus” of Disney’s works to train its gen AI models, which is believable, as Google’s image and video models will happily produce popular Disney characters—they couldn’t do that without feeding the models lots of Disney data.

The C&D also takes issue with Google for distributing “copies of its protected works” to consumers."

Has Cambridge-based AI music upstart Suno 'gone legit'?; WBUR, December 11, 2025

WBUR; Has Cambridge-based AI music upstart Suno 'gone legit'?

"The Cambridge-based AI music company Suno, which has been besieged by lawsuits from record labels, is now teaming up with behemoth label Warner Music. Under a new partnership, Warner will license music in its catalogue for use by Suno's AI.

Copyright law experts Peter Karol and Bhamati Viswanathan join WBUR's Morning Edition to discuss what the deal between Suno and Warner Music means for the future of intellectual property."

Disney Agrees to Bring Its Characters to OpenAI’s Sora Videos; The New York Times, December 11, 2025

The New York Times; Disney Agrees to Bring Its Characters to OpenAI’s Sora Videos

"In a watershed moment for Hollywood and generative artificial intelligence, Disney on Thursday announced an agreement to bring its characters to Sora, OpenAI’s short-form video platform. Videos made with Sora will be available to stream on Disney+ as part of the three-year deal...

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling,” Robert A. Iger, the chief executive of Disney, said in a statement.

Disney is the first major Hollywood company to cross this particular Rubicon."

‘Ruined my Christmas spirit’: McDonald’s removes AI-generated ad after backlash; Agence France-Presse via The Guardian, December 10, 2025

Agence France-Presse via The Guardian; ‘Ruined my Christmas spirit’: McDonald’s removes AI-generated ad after backlash

"Melanie Bridge, the chief executive of the Sweetshop Films, the company which made the ad, defended its use of AI in a post on LinkedIn.

“It’s never about replacing craft, it’s about expanding the toolbox. The vision, the taste, the leadership … that will always be human,” she said.

“And here’s the part people don’t see: the hours that went into this job far exceeded a traditional shoot. Ten people, five weeks, full-time.”

But that too sparked online debate.

Emlyn Davies, from the independent production company Bomper Studio, replied to the LinkedIn post: “What about the humans who would have been in it, the actors, the choir?

“Ten people on a project like this is a tiny amount compared to shooting it traditionally live action.”

Coca-Cola recently released its own AI-generated holiday ad, despite receiving backlash when it did the same last year.

The company’s new offering avoids close-ups of humans and mostly features AI-generated images of cute animals in a wintry setting."

Wednesday, December 10, 2025

EU investigates Google over AI-generated summaries in search results; BBC, December 8, 2025

 Liv McMahon , BBC; EU investigates Google over AI-generated summaries in search results

"The Commission's investigation comes down to whether Google has used the work of other people published online to build its own AI tools which it can profit from."

AI firms began to feel the legal wrath of copyright holders in 2025; NewScientist, December 10, 2025

Chris Stokel-Walker , NewScientist; AI firms began to feel the legal wrath of copyright holders in 2025

"The three years since the release of ChatGPT, OpenAI’s generative AI chatbot, have seen huge changes in every part of our lives. But one area that hasn’t changed – or at least, is still trying to maintain pre-AI norms – is the upholding of copyright law.

It is no secret that leading AI firms built their models by hoovering up data, including copyrighted material, from the internet without asking for permission first. This year, major copyright holders struck back, buffeting AI companies with a range of lawsuits alleging copyright infringement."

Tuesday, December 9, 2025

A.I. Videos Have Flooded Social Media. No One Was Ready.; The New York Times, December 8, 2025

Steven Lee Myers, The New York Times; A.I. Videos Have Flooded Social Media. No One Was Ready.

Apps like OpenAI’s Sora are fooling millions of users into thinking A.I. videos are real, even when they include warning labels.

"Videos like the fake interview above, created with OpenAI’s new app, Sora, show how easily public perceptions can be manipulated by tools that can produce an alternate reality with a series of simple prompts.

In the two months since Sora arrived, deceptive videos have surged on TikTok, X, YouTube, Facebook and Instagram, according to experts who track them. The deluge has raised alarm over a new generation of disinformation and fakes.

Most of the major social media companies have policies that require disclosure of artificial intelligence use and broadly prohibit content intended to deceive. But those guardrails have proved woefully inadequate for the kind of technological leaps OpenAI’s tools represent."

Film Studios, News Media and Even Competitor LexisNexis Among the Nine Amicus Briefs Supporting Thomson Reuters’ Copyright Case Against ROSS; LawSites, December 8, 2025

Bob Ambrogi , LawSites ; Film Studios, News Media and Even Competitor LexisNexis Among the Nine Amicus Briefs Supporting Thomson Reuters’ Copyright Case Against ROSS

"The long-running copyright litigation between Thomson Reuters and ROSS Intelligence is now pending in the 3rd U.S. Circuit Court of Appeals for an interlocutory appeal of the trial judge’s rulings in favor of TR. 

Recently here, I reported on the 10 amicus curiae briefs filed in support of ROSS, all arguing that the now-defunct AI legal research startup did not violate copyright law. 

Now, nine amicus briefs have been filed in support of TR. Those filing briefs range from major movie studios such as Disney and Paramount, to news media and copyright organizations, to individual copyright law professors, and even to TR’s principal competitor LexisNexis."

Monday, December 8, 2025

Public Domain Day 2026 Is Coming: Here’s What to Know; Copyright Lately, December 7, 2025

 Aaron Moss, Copyright Lately ; Public Domain Day 2026 Is Coming: Here’s What to Know

"Regular observers of copyright law’s favorite holiday know the drill: on January 1, 2026, a new crop of creative works from 1930 (along with sound recordings from 1925) will enter the public domain in the United States—ready to be remixed, recycled, or repurposed into B-grade horror films and ill-advised erotica.

Happy Public Domain Day 2026

This year’s film class is stacked with classics: Howard Hughes’s aviation epic Hell’s Angels(Jean Harlow’s screen debut and, at the time, the most expensive movie ever made); The Big Trail, featuring John Wayne in his first starring role; Greta Garbo’s first talkie, Anna Christie; Bing Crosby’s film debut in King of Jazz; and 1930 Best Picture winner All Quiet on the Western Front. There’s plenty of comedy too, including the Marx Brothers’ Animal Crackers,Laurel and Hardy’s Another Fine Mess, and Soup to Nuts, best remembered for featuring an early iteration of the Three Stooges."

Saturday, December 6, 2025

Library Agency Reinstates Grants Canceled by Trump Administration; The New York Times, December 5, 2025

The New York Times; Library Agency Reinstates Grants Canceled by Trump Administration


[Kip Currier: Restoration of Institute of Museum and Library Services (IMLS) grants for libraries, archives, and museums -- cut earlier this year by Trump 2.0 -- is good news for people throughout the country whose lives are enriched by these vital institutions and community anchors.

What does it say about an administration that eliminates support for libraries, archives, and museums -- institutions that provide free access to thousands of books and summer reading programs, historical records and exhibits, and life-enhancing programs like job-seeking support and AI literacy -- yet pumps millions and millions of dollars into building a White House ballroom that no one voted for and only the very wealthiest will ever have access to?]


[Excerpt]

"The federal agency that supports the nation’s libraries has restored thousands of grants canceled by the Trump administration, following a federal judge’s ruling that the executive order mandating the cuts was unlawful.

The executive order, issued in March, said the Institute of Museum and Library Services, along with six other small agencies, must “be reduced to the maximum extent consistent with the applicable law.” Soon after, the agency put most of its staff of 70 on administrative leave, fired its board members and began informing grant recipients that their federal funding had been eliminated.

In April, the attorneys general of 21 states filed a lawsuit arguing that the cuts, which included roughly $160 million in funding for state library agencies, violated federal law.

John J. McConnell Jr., the chief judge of the U.S. District Court for the District of Rhode Island, ruled in their favor on Nov. 21, calling the administration’s moves “arbitrary and capricious.” Canceling funding appropriated by Congress, he said, violated the doctrine of separation of powers.

This week, the agency announced the restoration of “all federal grants” in a terse post on its website. The post made no reference to the court ruling."

The New York Times sues Perplexity for producing ‘verbatim’ copies of its work; The Verge, December 5, 2025

Emma Roth, The Verge; The New York Times sues Perplexity for producing ‘verbatim’ copies of its work

"The New York Times has escalated its legal battle against the AI startup Perplexity, as it’s now suing the AI “answer engine” for allegedly producing and profiting from responses that are “verbatim or substantially similar copies” of the publication’s work.

The lawsuit, filed in a New York federal court on Friday, claims Perplexity “unlawfully crawls, scrapes, copies, and distributes” content from the NYT. It comes after the outlet’s repeated demands for Perplexity to stop using content from its website, as the NYT sent cease-and-desist notices to the AI startup last year and most recently in July, according to the lawsuit. The Chicago Tribune also filed a copyright lawsuit against Perplexity on Thursday."

Friday, December 5, 2025

The New York Times is suing Perplexity for copyright infringement; TechCrunch, December 5, 2025

Rebecca Bellan , TechCrunch; The New York Times is suing Perplexity for copyright infringement

"The New York Times filed suit Friday against AI search startup Perplexity for copyright infringement, its second lawsuit against an AI company. The Times joins several media outlets suing Perplexity, including the Chicago Tribune, which also filed suit this week."