Sunday, October 5, 2025

OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'; PC Gamer, October 5, 2025

PC Gamer; OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'

"This video is just one of many examples, but you'll have a much harder time finding Sora-generated videos containing Marvel or Disney characters. As reported by Automaton, Sora appears to be refusing prompts containing references to American IP, but Japanese IP didn't seem to be getting the same treatment over the past week.

Japanese lawyer and House of Representatives member Akihisa Shiozaki called for action to protect creatives in a post on X (formerly Twitter), which has been translated by Automaton: "I’ve tried out [Sora 2] myself, but I felt that it poses a serious legal and political problem. We need to take immediate action if we want to protect leading Japanese creators and the domestic content industry, and help them further develop. (I wonder why Disney and Marvel characters can’t be displayed).""

America goes gambling; Quartz, October 5, 2025

 Jackie Snow, Quartz; America goes gambling


[Kip Currier: This Quartz article America Goes Gambling is a timely one about a significant AI-driven development: massive growth in online gambling, sports betting, and gambling addictions after the U.S. Supreme Court struck down a de facto ban on these activities (outside of Nevada and tribal casinos) in 2018's Murphy v. NCAA decision.

I spoke on the issue of AI-enhanced online gambling and sports betting at the September 2025 Faithful Futures: Guiding AI with Wisdom and Witness conference in Minneapolis and am currently finishing a chapter for publication on this emerging topic.]


[Excerpt]

"On any given Sunday this football season, Americans are placing millions in legal sports bets, a level of widespread wagering that would have been almost impossible a decade ago when only Nevada offered legal sportsbooks.

Today's football slate represents the peak of a sports betting boom that has fundamentally altered how Americans watch games. Sunday's action is part of an industry that's grown from $4.9 billion in total annual wagers in 2017 to almost $150 billion in 2024. But beneath the Sunday spectacle lies a growing concern: addiction specialists report record demand for gambling help as the line between sports entertainment and financial risk becomes increasingly blurred.

The transformation has been swift and dramatic. When the Supreme Court struck down the federal sports betting ban in Murphy v. NCAA in 2018, legal sports betting was confined to Nevada and tribal casinos. Today, legal sports betting operates in 39 states and Washington, D.C., with more statehouses considering laws that would greenlight it."

Fraud, AI slop and huge profits: is science publishing broken? – podcast; The Guardian, October 2, 2025

The Guardian; Fraud, AI slop and huge profits: is science publishing broken? – podcast

"Scientists are warning that academic publishing needs urgent reform in order to retain trust in the research system. Ian Sample tells Madeleine Finlay what has gone so wrong, and Dr Mark Hanson of the University of Exeter proposes some potential solutions

Quality of scientific papers questioned as academics ‘overwhelmed’ by the millions published

Is the staggeringly profitable business of scientific publishing bad for science?

This podcast was amended on 2 October 2025 to include information about how AI is being used to spot low quality papers."

The Normalization of Book Banning; PEN America, October 1, 2025

Sabrina Baêta, Tasslyn Magnusson, Madison Markham, Kasey Meehan, Yuliana Tamayo Latorre, PEN America; The Normalization of Book Banning

"Introduction


In 2025, book censorship in the United States is rampant and common. Never before in the life of any living American have so many books been systematically removed from school libraries across the country. Never before have so many states passed laws or regulations to facilitate the banning of books, including bans on specific titles statewide. Never before have so many politicians sought to bully school leaders into censoring according to their ideological preferences, even threatening public funding to exact compliance. Never before has access to so many stories been stolen from so many children.


The book bans that have accumulated in the past four years are unprecedented and undeniable. This report looks back at the 2024-2025 school year – the fourth school year in the contemporary campaign to ban books – and illustrates the continued attacks on books, stories, identities, and histories.  


This report offers a window into the complex and extensive climate of censorship from July 1, 2024, through June 30, 2025. Our reporting on book bans remains a bellwether of a larger campaign to restrict and control education and public narratives, wreaking havoc on our public schools and democracy."

Saturday, October 4, 2025

‘Orwell: 2+2=5’ Review: How George Came to See the World as Orwellian; The New York Times, October 2, 2025

The New York Times; ‘Orwell: 2+2=5’ Review: How George Came to See the World as Orwellian

"“The opinion that art should have nothing to do with politics is itself a political attitude,” George Orwell wrote in 1946, a year after the end of World War II. That line appears early in “Orwell: 2 + 2 = 5,” an essayistic documentary from Raoul Peck that surveys its title subject’s life and work, using them as a lens to explore authoritarian power in the past and the present. Densely packed, the movie is a whirlwind of ideas and images, by turns heady, enlivening, disturbing and near-exhausting. It’s a work of visceral urgency from Peck, who’s best known for his 2017 documentary “I Am Not Your Negro,” about James Baldwin.

Peck plucked that observation about art and politics from Orwell’s essential 1946 essay “Why I Write,” in which he lists “four great motives for writing” — especially for writing prose and, of course, aside from earning a living — including “political purpose.” Near the end of the essay, Orwell writes that he hopes to start a new book. What soon followed was “Nineteen Eighty-Four,” the seismic novel that helped turn his name into an adjective. Anchored by Orwell’s writing — and Damian Lewis’s calm, intimate voice-over — Peck charts the writer’s life in tandem with world-shattering events, focusing on when he was working on “Nineteen Eighty-Four,” which was published in 1949. Months later, Orwell was dead."

Sam Altman says Sora will add ‘granular,’ opt-in copyright controls; TechCrunch, October 4, 2025

Anthony Ha, TechCrunch; Sam Altman says Sora will add ‘granular,’ opt-in copyright controls

"OpenAI may be reversing course on how it approaches copyright and intellectual property in its new video app Sora.

Prior to Sora’s launch this week, The Wall Street Journal reported that OpenAI had been telling Hollywood studios and agencies that they needed to explicitly opt out if they didn’t want their IP to be included in Sora-generated videos.

Despite being invite-only, the app quickly climbed to the top of the App Store charts. Sora’s most distinctive feature may be its “cameos,” where users can upload their biometric data to see their digital likeness featured in AI-generated videos.

At the same time, users also seem to delight in flouting copyright laws by creating videos with popular, studio-owned characters. In some cases, those characters might even criticize the company’s approach to copyright, for example in videos where Pikachu and SpongeBob interact with deepfakes of OpenAI CEO Sam Altman.

In a blog post published Friday, Altman said the company is already planning two changes to Sora, first by giving copyright holders “more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.”"

Thursday, October 2, 2025

Harvard Professors May Be Eligible for Payments in $1.5 Billion AI Copyright Settlement; The Harvard Crimson, October 1, 2025

Victoria D. Rengel, The Harvard Crimson; Harvard Professors May Be Eligible for Payments in $1.5 Billion AI Copyright Settlement

"Following mediation, the plaintiffs and defendants filed a motion for the preliminary approval of a settlement on Sept. 5, which included an agreement from Anthropic that it will destroy its pirated databases and pay $1.5 billion in damages to a group of authors and publishers.

On Sept. 25, a California federal judge granted preliminary approval for a settlement, the largest in the history of copyright cases in the U.S.

Each member of the class will receive a payment of approximately $3,000 per pirated work.

Authors whose works are in the databases are not notified separately, but instead must submit their contact information to receive a formal notice of the class action — meaning a number of authors, including many Harvard professors, may be unaware that their works were pirated by Anthropic.

Lynch said Anthropic’s nonconsensual use of her work undermines the purpose behind why she, and other scholars, write and publish their work.

“All of us at Harvard publish, but we thought when we were publishing that we are doing that — to communicate to other human beings,” she said. “Not to be fed into this mill.”"

Wednesday, October 1, 2025

The Rainmaker: What 20 Years In Supreme Court Practice Have Taught Me; Above The Law, September 30, 2025

Neal Katyal, Above The Law; The Rainmaker: What 20 Years In Supreme Court Practice Have Taught Me

"The Improv Principle

Ed. note: The Rainmaker is a new Above the Law series highlighting attorneys who have built distinguished practices by excelling not only in the courtroom and at the negotiating table, but also in business development, mentorship, and leadership. Each installment will feature candid reflections on what it takes to succeed as a rainmaker in today’s legal industry. Our first featured rainmaker is Neal Katyal...

For years, I’ve been studying improv comedy, and it’s transformed how I think about legal practice. The cardinal rule of improv is “yes, and”—you accept what your scene partners offer and build on it. You don’t say “no” or shut down their contribution. You make your partners look good, and in turn they make you look good.

This sounds soft. It’s not. It’s the hardest discipline I know.

In a meeting, when an associate offers an idea that seems off-base, the instinct is to correct them, to show why you’re the experienced lead counsel. The improv instinct is different: find what’s valuable in their contribution and build on it. “Yes, and we could take that framework and apply it to the jurisdictional question.” Suddenly, the associate isn’t embarrassed—they’re energized. They’ve contributed something real. They’ll work twice as hard for you, and next time, their idea might be the one that wins the case. 

This isn’t artificial, it’s definitely not about giving false praise.  A smart associate, after all, will see through that in a second.  It’s rather about trying to find the diamond in the rough, the insight that the associate has and that can be built upon. I kind of stumbled upon that idea when I did my first case, challenging Guantanamo. At my side were a dozen law students – and they would all have various writing assignments and my duty was to sort through all their insights and build a coherent product out of it. Many were off-the-wall, to be sure, but many were brilliant, too. It just took work to find those flashes of brilliance and to build upon them. That kind of “bottom-up” strategy is one I have taken to heart – so much so that today I routinely take advice on crafting arguments from my Researcher at Milbank. My Researcher is someone who has graduated from college and yet has not attended law school.

This isn’t just about associates or your internal team, it’s just as much about clients. When a client pushes back on your strategy, you could dig in and explain why you’re right. Or you could listen—really listen—to what’s driving their concern. Usually, they’re telling you something important about their business reality, their risk tolerance, or their board dynamics. “Yes, and given that constraint, what if we structured the argument this way?” Now you’re not just their lawyer; you’re their partner.

Why Clients Return

Twenty-three years ago when I wrote that piece, I thought clients hired you for your legal brilliance. They don’t. They hire you because you make their problems smaller, not bigger.

I’ve represented the same clients through multiple Supreme Court cases, not because I won every time (I haven’t), but because they trust that I’ll listen to what they actually need. Sometimes what they need is an aggressive cert petition. Sometimes what they need is someone to tell them that the case isn’t worth the institutional risk of taking to the Court. The clients who keep coming back are the ones who know you’ll give them the second answer when it’s true, even though it costs you a major case and significant fees.

This requires a specific kind of humility: the humility to know that the client understands their business better than you do, and that your legal judgment is in service of their goals, not the other way around. Supreme Court lawyers can struggle with this because we’re trained to think about doctrinal purity and legal architecture. But clients don’t care about your elegant theory of administrative law. They care about whether they can build the project, launch the product, or avoid the devastating liability.

The best piece of advice I ever received came from Eric Holder, who mentored me at the Justice Department in my first stint there, right after my clerkships. He watched me fail to persuade senior officials of a position that I was absolutely certain was right. Afterward, he pulled me aside. “Your analysis was perfect,” he said. “But you didn’t listen to their concerns. You tried to convince them you were right instead of understanding why they were worried. Next time, start by understanding their perspective.”

That lesson echoes through every client relationship, every oral argument, every brief. Start by understanding their perspective."

Disney Sends Cease And Desist Letter To Character.ai For Copyright Infringement As Studios Move To Protect IP; Deadline, September 30, 2025

Jill Goldsmith, Deadline; Disney Sends Cease And Desist Letter To Character.ai For Copyright Infringement As Studios Move To Protect IP

"Walt Disney sent a cease-and-desist letter to Character.AI, a “personalized superintelligence platform” that the media giant says is ripping off copyrighted characters without authorization.

The AI startup offers users the ability to create customizable, personalized AI companions that can be totally original but in some cases are inspired by existing characters, including, it seems, Disney icons from Spider-Man and Darth Vader to Moana and Elsa.

The letter is the latest legal salvo by Hollywood as studios begin to step up against AI. Disney has also sued AI company Midjourney for allegedly improper use and distribution of AI-generated characters from Disney films. Disney, Warner Bros. and Universal Pictures this month sued Chinese AI firm MiniMax for copyright infringement."

Tuesday, September 30, 2025

The fanfiction written on a notes app that’s become a bestseller – with a seven-figure film deal; The Guardian, September 29, 2025

The Guardian; The fanfiction written on a notes app that’s become a bestseller – with a seven-figure film deal

"How does a first-time novelist get out of the starting blocks quite like that? The thing is, the author behind the doorstopper dark fantasy novel, Alchemised, is no unknown debut: SenLinYu, 34, started off writing Harry Potter fanfiction that blew up online during the pandemic, racking up more than 20m downloads. Sen’s Draco and Hermione (“Dramione”) fanfic, heavily inspired by The Handmaid’s Tale, has now been rewritten – with third-party IP necessarily removed – and published traditionally as Alchemised. But if you didn’t know about Alchemised’s origins, you would be unlikely to clock them: even squinting, it’s hard to see any trace of Harry Potter in the revamped version, set in a different world and magic system...

Sen began rewriting Manacled around Christmas 2022, and as they were finishing up, they were approached by a literary agency. Stripping away the Harry Potter and overt Handmaid’s Tale references was not “in any way pleasant”, or something they would ever want to do again. “I would rather be shot,” Sen told their Tumblr followers last year. They described the process as taking two 5,000-piece jigsaw puzzles and “having to use both to build a new puzzle that somehow makes sense. Do not recommend. I also had to scrap and build from scratch an entirely new healing and medicine system and it was literally traumatic.”"

OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports; Reuters, September 29, 2025

Reuters; OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports

"OpenAI is planning to release a new version of its Sora generator that creates videos featuring copyrighted material, unless rights holders opt out of having their work appear, the Wall Street Journal reported on Monday, citing people familiar with the matter.

The artificial intelligence startup began notifying talent agencies and studios over the past week about the opt-out process and the product, which it plans to release in the coming days, the report said.

The new process would mean movie studios and other intellectual property owners would have to explicitly ask OpenAI not to include their copyrighted material in videos Sora creates, according to the report."

Monday, September 29, 2025

Former Penn Carey Law adjunct professor John Squires named director of U.S. Patent and Trademark Office; The Daily Pennsylvanian, September 29, 2025

Matthew Quitoriano, The Daily Pennsylvanian; Former Penn Carey Law adjunct professor John Squires named director of U.S. Patent and Trademark Office

"John Squires, a former adjunct professor at the University of Pennsylvania Carey Law School, was named the next director of the United States Patent and Trademark Office.

Squires will serve as the Under Secretary of Commerce for Intellectual Property and advise 1968 Wharton graduate and President Donald Trump and the Secretary of Commerce Howard Lutnick on intellectual property policy. In the Sept. 22 announcement, Squires wrote that the opportunity to lead a large and influential office was “both humbling and the honor of a lifetime.”...

Squires served as an adjunct professor for Penn Carey Law's L.L.M. program, where he helped lawyers trained outside the country learn about law in the United States.

The director of the USPTO is appointed by the president with the consent of the Senate...

Squires received a Bachelor of Science in chemistry from Bucknell University and received his J.D. from the University of Pittsburgh School of Law. 

Squires previously served as the chief intellectual property counsel at Honeywell and The Goldman Sachs Group, and has held intellectual property roles at Perkins Coie and Chadbourne and Parker. Prior to his secretarial appointment, Squires was the chair of Emerging Companies and Intellectual Property at Dilworth Paxson."

In the fight over AI, copyright is America’s competitive weapon; The Hill, September 29, 2025

The Hill; In the fight over AI, copyright is America’s competitive weapon


"On Monday, a new $100 million super-PAC network, Leading the Future, was introduced to shape artificial intelligence policy in next year’s elections. The group says it will fight for sensible guardrails on AI while pushing back against efforts it believes could slow AI development.

But if Leading the Future is to live up to its name, it must avoid an easy trap: framing copyright protections as an obstacle to American AI competitiveness."

I Sued Anthropic, and the Unthinkable Happened; The New York Times, September 29, 2025

The New York Times; I Sued Anthropic, and the Unthinkable Happened

"In August 2024, I became one of three named plaintiffs leading a class-action lawsuit against the A.I. company Anthropic for pirating my books and hundreds of thousands of other books to train its A.I. The fight felt daunting, almost preposterous: me — a queer, female thriller writer — versus a company now worth $183 billion?

Thanks to the relentless work of everyone on my legal team, the unthinkable happened: Anthropic agreed to pay authors and publishers $1.5 billion in the largest copyright settlement in history. A federal judge preliminarily approved the agreement last week.

This settlement sends a clear message to the Big Tech companies splashing generative A.I. over every app and page and program: You are not above the law. And it should signal to consumers everywhere that A.I. isn’t an unstoppable tsunami about to overwhelm us. Now is the time for ordinary Americans to recognize our agency and act to put in place the guardrails we want.

The settlement isn’t perfect. It’s absurd that it took an army of lawyers to demonstrate what any 10-year-old knows is true: Thou shalt not steal. At around $3,000 per work, shared by the author and publisher, the damages are far from life-changing (and, some argue, a slap on the wrist for a company flush with cash). I also disagree with the judge’s ruling that, had Anthropic acquired the books legally, training its chatbot on them would have been “fair use.” I write my novels to engage human minds — not to empower an algorithm to mimic my voice and spit out commodity knockoffs to compete directly against my originals in the marketplace, nor to make that algorithm’s creators unfathomably wealthy and powerful.

But as my fellow plaintiff Kirk Wallace Johnson put it, this is “the beginning of a fight on behalf of humans that don’t believe we have to sacrifice everything on the altar of A.I.” Anthropic will destroy its trove of illegally downloaded books; its competitors should take heed to get out of the business of piracy as well. Dozens of A.I. copyright lawsuits have been filed against OpenAI, Microsoft and other companies, led in part by Sylvia Day, Jonathan Franzen, David Baldacci, John Grisham, Stacy Schiff and George R. R. Martin. (The New York Times has also brought a suit against OpenAI and Microsoft.)

Though a settlement isn’t legal precedent, Bartz v. Anthropic may serve as a test case for other A.I. lawsuits, the first domino to fall in an industry whose “move fast, break things” modus operandi led to large-scale theft. Among the plaintiffs of other cases are voice actors, visual artists, record labels, YouTubers, media companies and stock-photo libraries, diverse stakeholders who’ve watched Big Tech encroach on their territory with little regard for copyright law...

Now the book publishing industry has sent a message to all A.I. companies: Our intellectual property isn’t yours for the taking, and you cannot act with impunity. This settlement is an opening gambit in a critical battle that will be waged for years to come."

Sunday, September 28, 2025

Why I gave the world wide web away for free; The Guardian, September 28, 2025

The Guardian; Why I gave the world wide web away for free

"Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren’t supposed to be able to look at any of this stuff? You generate all this data – your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it.

Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.

So how do we move forward? Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. The AI industry landscape is fiercely competitive, and development and governance are dictated by companies. The lesson from social media is that this will not create value for the individual.

I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research.

I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late."

Education report calling for ethical AI use contains over 15 fake sources; Ars Technica, September 12, 2025

Benj Edwards, Ars Technica; Education report calling for ethical AI use contains over 15 fake sources

"On Friday, CBC News reported that a major education reform document prepared for the Canadian province of Newfoundland and Labrador contains at least 15 fabricated citations that academics suspect were generated by an AI language model—despite the same report calling for "ethical" AI use in schools.

"A Vision for the Future: Transforming and Modernizing Education," released August 28, serves as a 10-year roadmap for modernizing the province's public schools and post-secondary institutions. The 418-page document took 18 months to complete and was unveiled by co-chairs Anne Burke and Karen Goodnough, both professors at Memorial University's Faculty of Education, alongside Education Minister Bernard Davis...

The irony runs deep

The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report's 110 recommendations specifically states the provincial government should "provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use."

Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations. "Around the references I cannot find, I can't imagine another explanation," she told CBC. "You're like, 'This has to be right, this can't not be.' This is a citation in a very important document for educational policy.""

Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers; The Hastings Center for Bioethics, September 25, 2025

The Hastings Center for Bioethics; Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers

"As artificial intelligence rapidly transforms healthcare, The Hastings Center for Bioethics has released an interactive tool to help policymakers, patients and providers understand the ways that AI is being used in medicine—from making a diagnosis to evaluating insurance claims—and navigate the ethical questions that emerge along the way.

The new tool, a Patient’s Journey with Medical AI, follows an imaginary patient through five interactions with medical AI. It guides users through critical decision points in diagnostics, treatment, and communication, offering personalized insights into how algorithms might influence their care. 

Each decision point in the Patient’s Journey includes a summary of the ethical issues raised and multiple choice questions intended to stimulate thinking and discussion about particular uses of AI in medicine. Policy experts from across the political spectrum were invited to review the tool for accuracy and utility.

The Patient’s Journey is the latest in a set of resources developed through Hastings on the Hill, a project that translates bioethics research for use by policymakers—with an initial focus on medical AI. “This isn’t just about what AI can do — it’s about what it should do,” said Hastings Center President Vardit Ravitsky, who directs Hastings on the Hill. “Patients deserve to understand how technologies affect their health decisions, and policymakers can benefit from expert guidance as they seek to ensure that AI serves the public good.”

The Greenwall Foundation is supporting this initiative. Additional support comes from The Donaghue Foundation and the National Institutes of Health’s Bridge2AI initiative.

In addition to using Hastings on the Hill resources, policymakers, industry leaders, and others who shape medical AI policy and practice are invited to contact The Hastings Center with questions related to ethical issues they are encountering. Hastings Center scholars and fellows can provide expert nonpartisan analysis on urgent bioethics issues, such as algorithmic bias, patient privacy, data governance, and informed consent.

“Ethics should not be an afterthought,” says Ravitsky. “Concerns about biased health algorithms and opaque clinical decision tools have underscored the need for ethical oversight alongside technical innovation.”

“The speed of AI development has outpaced the ethical guardrails we need,” said Erin Williams, President and CEO of EDW Wisdom, LLC — the consultancy working with The Hastings Center. “Our role is to bridge that gap —ensuring that human dignity, equity, and trust are not casualties of technological progress.”

Explore Patient’s Journey with Medical AI. Learn more about Hastings on the Hill."

Morgan & Morgan takes Disney to court over rights to feature ‘Steamboat Willie’ in law firm ads; News6, September 17, 2025

Phil Landeros, News6; Morgan & Morgan takes Disney to court over rights to feature ‘Steamboat Willie’ in law firm ads

"Morgan & Morgan, Florida’s largest law firm, has filed a lawsuit asking a judge to declare the law firm’s planned use of the iconic film in an ad is protected against trademark claims from Disney. Steamboat Willie entered the public domain on Jan. 1, 2024, when Disney’s copyright protection expired.

The proposed ad depicts Mickey Mouse in a boat collision with a car, after which the driver seeks legal representation from Morgan & Morgan. According to the filing, before moving forward with the advertisement, the law firm sought assurance from Disney that the commercial wouldn’t trigger legal action. The filing said Disney would not comply."

Trump administration wants cut of universities’ patent revenue: Lutnick; The Hill, September 10, 2025

Lexi Lonas Cochran, The Hill; Trump administration wants cut of universities’ patent revenue: Lutnick

"Commerce Secretary Howard Lutnick indicated the Trump administration is looking to take a cut of the revenue generated by university patents developed through federally funded research.

Lutnick in an interview with Axios published Wednesday discussed taking a portion of revenue from the patents, arguing it is unfair for the government to give universities the money with no financial return...

The original purpose of universities maintaining full ownership of patents was to incentivize the quick development of new technologies. And while most universities seek to commercialize patents, they generally would make more money by writing grants, according to a 2024 study cited by Axios."

Countries Consider A.I.’s Dangers and Benefits at U.N.; The New York Times, September 25, 2025

The New York Times; Countries Consider A.I.’s Dangers and Benefits at U.N.

"The United Nations on Thursday announced a plan to establish itself as the leading global forum to guide the path and pace of artificial intelligence, a major foray into the raging debate over the future of the rapidly changing technology.

As part of its General Assembly this week, the organization said it was implementing a “global dialogue on artificial intelligence governance,” to assemble ideas and best practices on A.I. governance. The U.N. also said it would form a 40-member panel of scientific experts to synthesize and analyze the research on A.I. risks and opportunities, in the vein of previous similar efforts by the body on climate change and nuclear policy.

To begin the initiative, dozens of U.N. member nations — and a few tech companies, academics and nonprofits — spent a portion of Thursday summarizing their hopes and concerns about A.I."