Sunday, November 10, 2024

What’s Happening with AI and Copyright Law; JD Supra, November 4, 2024

AEON Law, JD Supra; What’s Happening with AI and Copyright Law

"Not surprisingly, a lot is happening at the intersection of artificial intelligence (AI) and intellectual property (IP) law.

Here’s a roundup of some recent developments in the area of copyright law and AI.

Copyright Office Denies AI Security Research Exemption under DMCA...

Former OpenAI Employee Says It Violates Copyright Law...

Blade Runner Production Company Sues Tesla for AI-Aided Copyright Infringement"

Saturday, November 9, 2024

OpenAI Gets a Win as Court Says No Harm Was Demonstrated in Copyright Case; Gizmodo, November 8, 2024

Gizmodo; OpenAI Gets a Win as Court Says No Harm Was Demonstrated in Copyright Case

"OpenAI won an initial victory on Thursday in one of the many lawsuits the company is facing for its unlicensed use of copyrighted material to train generative AI products like ChatGPT.

A federal judge in the southern district of New York dismissed a complaint brought by the media outlets Raw Story and AlterNet, which claimed that OpenAI violated copyright law by purposefully removing what is known as copyright management information, such as article titles and author names, from material that it incorporated into its training datasets.

OpenAI had filed a motion to dismiss the case, arguing that the plaintiffs did not have standing to sue because they had not demonstrated a concrete harm to their businesses caused by the removal of the copyright management information. Judge Colleen McMahon agreed, dismissing the lawsuit but leaving the door open for the plaintiffs to file an amended complaint."

Thursday, November 7, 2024

‘I’m going to sue the living pants off them’: AI’s big legal showdown – and what it means for Dr Strange’s hair; The Guardian, November 6, 2024

The Guardian; ‘I’m going to sue the living pants off them’: AI’s big legal showdown – and what it means for Dr Strange’s hair

"“The intersection of generative AI and CGI image creation is the next wave.”

Now that wave is threatening to flood an unprepared industry, washing away jobs and certainties. How do people in the industry feel? To find out, I attended Trojan Horse Was a Unicorn (THU), a digital arts festival near Lisbon in Portugal. Now in its 10th year, THU is a place where young artists entering these industries, some 750 of them, come to meet, get inspired and learn from veterans in their fields: film-makers, animators, VFX wizards, concept artists, games designers. This year, AI is the elephant in the room. Everyone is either talking about it – or avoiding talking about it...

Andre Luis, the 43-year-old CEO and co-founder of THU, acknowledges that “the anxiety is here” at this year’s event, but rather than running away from it, he argues, artists should be embracing it. One of the problems now is that the people eagerly adopting AI are executives and managers. “They don’t understand how to use AI to accelerate creativity,” he says, “or to make things better for everyone, so it’s up to us [the artists] to teach them. You need people who actually are creative to use AI.”

Luis likens generative AI to ultra processed food: it cannot create anything new; it can only reconstitute what’s already there, turning it into an inferior product. “And a lot of companies are trying to make fast food,” he says. Many see AI as a way to churn out quick, cheap content, as opposed to higher quality fare that has been created “organically” over time, with loving human input...

The democratising potential of AI could usher in what Luis calls “a new era of indie” in films, games, TV. Just as digital technology put cameras, editing and graphics tools into the hands of many more people...

“AI is something that is here,” he tells the young creators at THU, “so you need to adapt. See the opportunities, see the problems, but understand that it can help you do things in a different way. You need to ask yourselves, ‘How can I be part of that?’"

Tuesday, November 5, 2024

The Heart of the Matter: Copyright, AI Training, and LLMs; SSRN, November 1, 2024

Daniel J. Gervais, Vanderbilt University - Law School

Noam Shemtov, Queen Mary University of London, Centre for Commercial Law Studies

Haralambos Marmanis, Copyright Clearance Center

Catherine Zaller Rowland, Copyright Clearance Center

SSRN; The Heart of the Matter: Copyright, AI Training, and LLMs



"Abstract

This article explores the intricate intersection of copyright law and large language models (LLMs), a cutting-edge artificial intelligence technology that has rapidly gained prominence. The authors provide a comprehensive analysis of the copyright implications arising from the training, fine-tuning, and use of LLMs, which often involve the ingestion of vast amounts of copyrighted material. The paper begins by elucidating the technical aspects of LLMs, including tokenization, word embeddings, and the various stages of LLM development. This technical foundation is crucial for understanding the subsequent legal analysis. The authors then delve into the copyright law aspects, examining potential infringement issues related to both inputs and outputs of LLMs. A comparative legal analysis is presented, focusing on the United States, European Union, United Kingdom, Japan, Singapore, and Switzerland. The article scrutinizes relevant copyright exceptions and limitations in these jurisdictions, including fair use in the US and text and data mining exceptions in the EU. The authors highlight the uncertainties and challenges in applying these legal concepts to LLMs, particularly in light of recent court decisions and legislative developments. The paper also addresses the potential impact of the EU's AI Act on copyright considerations, including its extraterritorial effects. Furthermore, it explores the concept of "making available" in the context of LLMs and its implications for copyright infringement. Recognizing the legal uncertainties and the need for a balanced approach that fosters both innovation and copyright protection, the authors propose licensing as a key solution. They advocate for a combination of direct and collective licensing models to provide a practical framework for the responsible use of copyrighted materials in AI systems.

This article offers valuable insights for legal scholars, policymakers, and industry professionals grappling with the copyright challenges posed by LLMs. It contributes to the ongoing dialogue on adapting copyright law to technological advancements while maintaining its fundamental purpose of incentivizing creativity and innovation."

Penguin Random House books now explicitly say ‘no’ to AI training; The Verge, October 18, 2024

Emma Roth, The Verge; Penguin Random House books now explicitly say ‘no’ to AI training

"Book publisher Penguin Random House is putting its stance on AI training in print. The standard copyright page on both new and reprinted books will now say, “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems,” according to a report from The Bookseller spotted by Gizmodo. 

The clause also notes that Penguin Random House “expressly reserves this work from the text and data mining exception” in line with the European Union’s laws. The Bookseller says that Penguin Random House appears to be the first major publisher to account for AI on its copyright page. 

What gets printed on that page might be a warning shot, but it also has little to do with actual copyright law. The amended page is sort of like Penguin Random House’s version of a robots.txt file, which websites will sometimes use to ask AI companies and others not to scrape their content. But robots.txt isn’t a legal mechanism; it’s a voluntarily adopted norm across the web. Copyright protections exist regardless of whether the copyright page is slipped into the front of the book, and fair use and other defenses (if applicable!) also exist even if the rights holder says they do not."
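As the excerpt notes, robots.txt is a voluntary convention, not a legal mechanism. For illustration, a site asking AI crawlers not to ingest its content might publish a file like the sketch below (GPTBot and CCBot are the publicly documented crawler tokens for OpenAI and Common Crawl; honoring the file is entirely at the crawler's discretion, which is exactly the analogy being drawn to the new copyright-page wording):

```
# robots.txt — a request, not an enforceable rule
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```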

Monday, November 4, 2024

What AI knows about you; Axios, November 4, 2024

Ina Fried, Axios; What AI knows about you

"Most AI builders don't say where they are getting the data they use to train their bots and models — but legally they're required to say what they are doing with their customers' data.

The big picture: These data-use disclosures open a window onto the otherwise opaque world of Big Tech's AI brain-food fight.

  • In this new Axios series, we'll tell you, company by company, what all the key players are saying and doing with your personal information and content.

Why it matters: You might be just fine knowing that picture you just posted on Instagram is helping train the next generative AI art engine. But you might not — or you might just want to be choosier about what you share.

Zoom out: AI makers need an incomprehensibly gigantic amount of raw data to train their large language and image models. 

  • The industry's hunger has led to a data land grab: Companies are vying to teach their baby AIs using information sucked in from many different sources — sometimes with the owner's permission, often without it — before new laws and court rulings make that harder. 

Zoom in: Each Big Tech giant is building generative AI models, and many of them are using their customer data, in part, to train them.

  • In some cases it's opt-in, meaning your data won't be used unless you agree to it. In other cases it is opt-out, meaning your information will automatically get used unless you explicitly say no. 
  • These rules can vary by region, thanks to legal differences. For instance, Meta's Facebook and Instagram are "opt-out" — but you can only opt out if you live in Europe or Brazil.
  • In the U.S., California's data privacy law is among the laws responsible for requiring firms to say what they do with user data. In the EU, it's the GDPR."

Sunday, November 3, 2024

An ‘Interview’ With a Dead Luminary Exposes the Pitfalls of A.I.; The New York Times, November 3, 2024

The New York Times; An ‘Interview’ With a Dead Luminary Exposes the Pitfalls of A.I.

"When a state-funded Polish radio station canceled a weekly show featuring interviews with theater directors and writers, the host of the program went quietly, resigned to media industry realities of cost-cutting and shifting tastes away from highbrow culture.

But his resignation turned to fury in late October after his former employer, Off Radio Krakow, aired what it billed as a “unique interview” with an icon of Polish culture, Wislawa Szymborska, the winner of the 1996 Nobel Prize for Literature.

The terminated radio host, Lukasz Zaleski, said he would have invited Ms. Szymborska on his morning show himself, but never did for a simple reason: She died in 2012.

The station used artificial intelligence to generate the recent interview — a dramatic and, to many, outrageous example of technology replacing humans, even dead ones."

Friday, November 1, 2024

AI Training Study to Come This Year, Copyright Office Says; Bloomberg Law, October 31, 2024

Annelise Gilbert, Bloomberg Law; AI Training Study to Come This Year, Copyright Office Says

"The Copyright Office’s report on the legal implications of training artificial intelligence models on copyrighted works is still expected to publish by the end of 2024, the office’s director told lawmakers.

Director Shira Perlmutter on Wednesday said the office aims to complete the remaining two sections of its three-part AI report in the next two months—one on the copyrightability of generative AI output and the other about liability, licensing, and fair use in regards to AI training on protected works."

Thursday, October 31, 2024

Thousands of published studies may contain images with incorrect copyright licences; Chemistry World, October 28, 2024

Chemistry World; Thousands of published studies may contain images with incorrect copyright licences

"More than 9000 studies published in open-access journals may contain figures published under the wrong copyright licence.

These open-access journals publish content under the CC-BY copyright licence, which means that anyone can copy, distribute or transmit that work including for commercial purposes as long as the original creator is credited. 

All the 9000+ studies contain figures created using the commercial scientific illustration service BioRender, which should technically mean that these are also available for free reuse. But that doesn’t appear to be the case.

When Simon Dürr, a computational enzymologist at the Swiss Federal Institute of Technology Lausanne in Switzerland, reached out to BioRender to ask if two figures produced using BioRender by the authors of both studies were free to reuse, he was told that they weren’t. The company said it would approach both journals and ask them to issue corrections.

Dürr runs an open-source, free-to-use competitor to BioRender called BioIcons and wanted to host figures produced using BioRender that were published in open access journals because he thought they would be free to use. According to Dürr, he followed up with BioRender near the end of 2023, flagging a total of 9277 academic papers published under the CC-BY copyright licence but never heard back on their copyright status. In total, Dürr says he found 12,059 papers if one includes other copyright licences that restrict commercial use or have other similar conditions."

Wednesday, October 30, 2024

A Harris Presidency Is the Only Way to Stay Ahead of A.I.; The New York Times, October 29, 2024

 THOMAS L. FRIEDMAN, The New York Times; A Harris Presidency Is the Only Way to Stay Ahead of A.I.

"Kamala Harris, given her background in law enforcement, connections to Silicon Valley and the work she has already done on A.I. in the past four years, is up to this challenge, which is a key reason she has my endorsement for the presidency...

I am writing a book that partly deals with this subject and have benefited from my tutorials with Craig Mundie, the former chief research and strategy officer for Microsoft who still advises the company. He is soon coming out with a book of his own related to the longer-term issues and opportunities of A.G.I., written with Eric Schmidt, the former Google C.E.O., and Henry Kissinger, who died last year and worked on the book right up to the end of his life.

It is titled “Genesis: Artificial Intelligence, Hope, and the Human Spirit.” The book invokes the Bible’s description of the origin of humanity because the authors believe that our A.I. moment is an equally fundamental turning point for our species.

I agree. We have become Godlike as a species in two ways: We are the first generation to intentionally create a computer with more intelligence than God endowed us with. And we are the first generation to unintentionally change the climate with our own hands.

The problem is we have become Godlike without any agreement among us on the Ten Commandments — on a shared value system that should guide the use of our newfound powers. We need to fix that fast. And no one is better positioned to lead that challenge than the next U.S. president, for several reasons."

Monday, October 28, 2024

THE COPYRIGHT CONTROVERSY BEHIND A VIRAL GOSPEL HIT; Christianity Today, October 28, 2024

Christianity Today; THE COPYRIGHT CONTROVERSY BEHIND A VIRAL GOSPEL HIT

"Like any growing genre, Christian music’s increasing global popularity has placed a higher value on hits—and raised the stakes of proper attribution and credit. But in a Christian context, conflicts over credit and compensation can be especially fraught. The appearance of greed or opportunism can threaten a Christian artist’s reputation, but failure to claim credit threatens their livelihood—especially for independent musicians and producers or those working in smaller and developing industries like Ghana’s.

“Many dismiss the importance of legal considerations with statements like ‘Since it’s a God thing, it’s free and for everyone,’” said Eugene Zuta, a Ghanaian songwriter and worship leader. “As a result, copyright issues are often disregarded, and regulations are violated. Some of my songs have been used by others, who make light of their infringement with lame excuses.”"

Video game libraries lose legal appeal to emulate physical game collections online; Ars Technica, October 25, 2024

KYLE ORLAND, Ars Technica; Video game libraries lose legal appeal to emulate physical game collections online

"Earlier this year, we reported on the video game archivists asking for a legal DMCA exemption to share Internet-accessible emulated versions of their physical game collections with researchers. Today, the US Copyright Office announced once again that it was denying that request, forcing researchers to travel to far-flung collections for access to the often-rare physical copies of the games they're seeking.

In announcing its decision, the Register of Copyrights for the Library of Congress sided with the Entertainment Software Association and others who argued that the proposed remote access could serve as a legal loophole for a free-to-access "online arcade" that could harm the market for classic gaming re-releases. This argument resonated with the Copyright Office despite a VGHF study that found 87 percent of those older game titles are currently out of print."

Sunday, October 27, 2024

Public Knowledge, iFixit Free the McFlurry, Win Copyright Office DMCA Exemption for Ice Cream Machines; Public Knowledge, October 25, 2024

Shiva Stella, Public Knowledge; Public Knowledge, iFixit Free the McFlurry, Win Copyright Office DMCA Exemption for Ice Cream Machines

"Today, the U.S. Copyright Office partially granted an exemption requested by Public Knowledge and iFixit to allow people to circumvent digital locks in order to repair commercial and industrial equipment. The Office did not grant the full scope of the requested exemption, but did grant an exemption specifically allowing for repair of retail-level food preparation equipment – including soft serve ice cream machines similar to those available at McDonald’s. The Copyright Office reviewed the request as part of its 1201 review process, which encourages advocates and public interest groups to present arguments for exemption to the Digital Millennium Copyright Act.

Section 1201 of the DMCA makes it illegal to bypass a digital lock that protects a copyrighted work, such as a device’s software, even when there is no copyright infringement. Every three years, the Copyright Office reviews exemption requests and issues recommendations to the Librarian of Congress on granting certain exceptions to Section 1201. The recommendations go into effect once approved by the Librarian of Congress."

Friday, October 25, 2024

Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools; The New York Times, October 24, 2024

The New York Times; Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools

"President Biden on Thursday signed the first national security memorandum detailing how the Pentagon, the intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting “guardrails” on how such tools are employed in decisions varying from nuclear weapons to granting asylum.

The new document is the latest in a series Mr. Biden has issued grappling with the challenges of using A.I. tools to speed up government operations — whether detecting cyberattacks or predicting extreme weather — while limiting the most dystopian possibilities, including the development of autonomous weapons.

But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will go into full effect after Mr. Biden leaves office, leaving open the question of whether the next administration will abide by them...

The new guardrails would also prohibit letting artificial intelligence tools make a decision on granting asylum. And they would forbid tracking someone based on ethnicity or religion, or classifying someone as a “known terrorist” without a human weighing in.

Perhaps the most intriguing part of the order is that it treats private-sector advances in artificial intelligence as national assets that need to be protected from spying or theft by foreign adversaries, much as early nuclear weapons were. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to safeguard their inventions."

Wednesday, October 23, 2024

Former OpenAI Researcher Says the Company Broke Copyright Law; The New York Times, October 23, 2024

The New York Times; Former OpenAI Researcher Says the Company Broke Copyright Law

"Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.

“This is not a sustainable model for the internet ecosystem as a whole,” he told The Times."

Monday, October 21, 2024

Microsoft boss urges rethink of copyright laws for AI; The Times, October 21, 2024

Katie Prescott, The Times; Microsoft boss urges rethink of copyright laws for AI

"The boss of Microsoft has called for a rethink of copyright laws so that tech giants are able to train artificial intelligence models without risk of infringing intellectual property rights.

Satya Nadella, chief executive of the technology multinational, praised Japan’s more flexible copyright laws and said that governments need to develop a new legal framework to define “fair use” of material, which allows people in certain situations to use intellectual property without permission.

Nadella, 57, said governments needed to iron out the rules. “What are the bounds for copyright, which obviously have to be protected? What’s fair use?” he said. “For any society to move forward, you need to know what is fair use.”"

News Corp Sues AI Company Perplexity Over Copyright Claims, Made Up Text; The Hollywood Reporter, October 21, 2024

Caitlin Huston, The Hollywood Reporter; News Corp Sues AI Company Perplexity Over Copyright Claims, Made Up Text

"Dow Jones, the parent company to the Wall Street Journal, and the New York Post filed a lawsuit Monday against artificial intelligence company Perplexity, alleging that the company is illegally using copyrighted work.

The suit alleges that Perplexity, which is an AI research and conversational search engine, draws on articles and other copyrighted content from the publishers to feed into its product and then repackages the content in its responses, or sometimes uses the content verbatim, without linking back to the articles. The engine can also be used to display several paragraphs or entire articles, when asked."

‘Blade Runner 2049’ Producers Sue Elon Musk, Tesla and Warner Bros. Discovery, Alleging Copyright Infringement; Variety, October 21, 2024

Todd Spangler, Variety; ‘Blade Runner 2049’ Producers Sue Elon Musk, Tesla and Warner Bros. Discovery, Alleging Copyright Infringement

"Alcon Entertainment, the production company behind “Blade Runner 2049,” sued Tesla and CEO Elon Musk, as well as Warner Bros. Discovery, alleging that AI-generated images depicting scenes from the film used for the launch of Tesla’s self-driving Robotaxi represent copyright infringement.

In its lawsuit, filed Monday in L.A., Alcon said it had adamantly insisted that “Blade Runner 2049,” which stars Ryan Gosling and Harrison Ford, have no affiliation of any kind with “Tesla, X, Musk or any Musk-owned company,” given “Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech.”"

Saturday, October 19, 2024

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That; Electronic Frontier Foundation (EFF), October 16, 2024

 CORYNNE MCSHERRY, Electronic Frontier Foundation (EFF); Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

"For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country – the codes that protect product, building and environmental safety – and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York, to Missouri, to the District of Columbia, judges understand that this is an absurd and undemocratic proposition.

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes – like the National Electrical Code – can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme to the ruling is the public interest in accessing law:"

Friday, October 18, 2024

Mass shooting survivors turn to an unlikely place for justice – copyright law; The Guardian, October 18, 2024

The Guardian; Mass shooting survivors turn to an unlikely place for justice – copyright law

"In a Nashville courtroom in early July, survivors of the 2023 Covenant school shooting celebrated an unusual legal victory. Citing copyright law, Judge l’Ashea Myles ruled that the assailant’s writings and other creative property could not be released to the public.

After months of hearings, the decision came down against conservative lawmakers, journalists and advocates who had sued for access to the writings, claiming officials had no right to keep them from the public. But since parents of the assailant – who killed six people at the private Christian elementary school, including three nine-year-old children – signed legal ownership of the shooter’s journals over to the families of surviving students last year, Myles said releasing the materials would violate the federal Copyright Act...

Keeping a shooter’s name or creative property – such as a manifesto or recording – out of the public eye does more than protect the emotional wellbeing of those impacted, experts say. It also helps to prevent future massacres.

That such material can serve as inspiration has been widely documented, explains Rachel Carroll Rivas of the Southern Poverty Law Center’s Intelligence Project. “Those videos just have an inherent dangerous factor, and they really shouldn’t be allowed to spread across the internet,” she said.

The danger stems from the fact that shooters, research has shown, often desire attention, recognition and notoriety."

It Sure Looks Like Trump Watches Are Breaking Copyright Law; Wired, October 18, 2024

Matt Giles, Wired; It Sure Looks Like Trump Watches Are Breaking Copyright Law

"According to the Associated Press, though, TheBestWatchesonEarth LLC advertised a product it can’t deliver, as that image is owned by the 178-year-old news agency. This week, the AP told WIRED it is pursuing a cease and desist against the LLC, which is registered in Sheridan, Wyoming. (The company did not reply to a request for comment about the cease and desist letter.)

Evan Vucci, the AP’s Pulitzer Prize–winning chief photographer, took that photograph, and while he told WIRED he does not own the rights to that image, the AP confirmed earlier this month in an email to WIRED that it is filing the written notice. “AP is proud of Evan Vucci’s photo and recognizes its impact,” wrote AP spokesperson Nicole Meir. “We reserve our rights to this powerful image, as we do with all AP journalism, and continue to license it for editorial use only.”"

Penguin Random House underscores copyright protection in AI rebuff; The Bookseller, October 18, 2024

 MATILDA BATTERSBY, The Bookseller; Penguin Random House underscores copyright protection in AI rebuff

"The world’s biggest trade publisher has changed the wording on its copyright pages to help protect authors’ intellectual property from being used to train large language models (LLMs) and other artificial intelligence (AI) tools, The Bookseller can exclusively reveal.

Penguin Random House (PRH) has amended its copyright wording across all imprints globally, confirming it will appear “in imprint pages across our markets”. The new wording states: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems”, and will be included in all new titles and any backlist titles that are reprinted.

The statement also “expressly reserves [the titles] from the text and data mining exception”, in accordance with a European Parliament directive.

The move specifically to ban the use of its titles by AI firms for the development of chatbots and other digital tools comes amid a slew of copyright infringement cases in the US and reports that large tranches of pirated books have already been used by tech companies to train AI tools. In 2024, several academic publishers including Taylor & Francis, Wiley and Sage have announced partnerships to license content to AI firms.

PRH is believed to be the first of the Big Five anglophone trade publishers to amend its copyright information to reflect the acceleration of AI systems and the alleged reliance by tech companies on using published work to train language models."

Thursday, October 17, 2024

Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?; The Guardian, October 16, 2024

 Garrison Lovely, The Guardian; Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill? 

"I’m writing a book on the economics and politics of AI and have analyzed years of nationwide polling on the topic. The findings are pretty consistent: people worry about risks from AI, favor regulations, and don’t trust companies to police themselves. Incredibly, these findings tend to hold true for both Republicans and Democrats.

So why would Newsom buck the popular bill?

Well, the bill was fiercely resisted by most of the AI industry, including Google, Meta and OpenAI. The US has let the industry self-regulate, and these companies desperately don’t want that to change – whatever sounds their leaders make to the contrary...

The top three names on the congressional letter – Zoe Lofgren, Anna Eshoo, and Ro Khanna – have collectively taken more than $4m in political contributions from the industry, accounting for nearly half of their lifetime top-20 contributors. Google was their biggest donor by far, with nearly $1m in total.

The death knell probably came from the former House speaker Nancy Pelosi, who published her own statement against the bill, citing the congressional letter and Li’s Fortune op-ed.

In 2021, reporters discovered that Lofgren’s daughter is a lawyer for Google, which prompted a watchdog to ask Pelosi to negotiate her recusal from antitrust oversight roles.

Who came to Lofgren’s defense? Eshoo and Khanna.

Three years later, Lofgren remains in these roles, which have helped her block efforts to rein in big tech – against the will of even her Silicon Valley constituents.

Pelosi’s 2023 financial disclosure shows that her husband owned between $16m and $80m in stocks and options in Amazon, Google, Microsoft and Nvidia...

Sunny Gandhi of the youth tech advocacy group Encode Justice, which co-sponsored the bill, told me: “When you tell the average person that tech giants are creating the most powerful tools in human history but resist simple measures to prevent catastrophic harm, their reaction isn’t just disbelief – it’s outrage. This isn’t just a policy disagreement; it’s a moral chasm between Silicon Valley and Main Street.”

Newsom just told us which of these he values more."