Showing posts with label AI tech companies. Show all posts

Thursday, August 28, 2025

Anthropic’s surprise settlement adds new wrinkle in AI copyright war; Reuters, August 27, 2025

Reuters; Anthropic’s surprise settlement adds new wrinkle in AI copyright war

"Anthropic's class action settlement with a group of U.S. authors this week was a first, but legal experts said the case's distinct qualities complicate the deal's potential influence on a wave of ongoing copyright lawsuits against other artificial-intelligence focused companies like OpenAI, Microsoft and Meta Platforms.

Amazon-backed Anthropic was under particular pressure, with a trial looming in December after a judge found it liable for pirating millions of copyrighted books. The terms of the settlement, which require a judge's approval, are not yet public. And U.S. courts have just begun to wrestle with novel copyright questions related to generative AI, which could prompt other defendants to hold out for favorable rulings."

Monday, August 25, 2025

How ChatGPT Surprised Me; The New York Times, August 24, 2025

The New York Times; How ChatGPT Surprised Me

"In some corners of the internet — I’m looking at you, Bluesky — it’s become gauche to react to A.I. with anything save dismissiveness or anger. The anger I understand, and parts of it I share. I am not comfortable with these companies becoming astonishingly rich off the entire available body of human knowledge. Yes, we all build on what came before us. No company founded today is free of debt to the inventors and innovators who preceded it. But there is something different about inhaling the existing corpus of human knowledge, algorithmically transforming it into predictive text generation and selling it back to us. (I should note that The New York Times is suing OpenAI and its partner Microsoft for copyright infringement, claims both companies have denied.)

Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built...

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the fewest number of users it will have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me."

Saturday, August 23, 2025

Watering down Australia’s AI copyright laws would sacrifice writers’ livelihoods to ‘brogrammers’; The Guardian, August 11, 2025

 Tracey Spicer, The Guardian; Watering down Australia’s AI copyright laws would sacrifice writers’ livelihoods to ‘brogrammers’

"My latest book, which is about artificial intelligence discriminating against people from marginalised communities, was composed on an Apple Mac.

Whatever the form of recording the first rough draft of history, one thing remains the same: they are very human stories – stories that change the way we think about the world.

A society is the sum of the stories it tells. When stories, poems or books are “scraped”, what does this really mean?

The definition of scraping is to “drag or pull a hard or sharp implement across (a surface or object) so as to remove dirt or other matter”.

A long way from Brisbane or Bangladesh, in the rarefied climes of Silicon Valley, scrapers are removing our stories as if they are dirt.

These stories are fed into the machines of the great god: generative AI. But the outputs – their creations – are flatter, less human, more homogenised. ChatGPT tells tales set in metropolitan areas in the global north; of young, cishet men and people living without disability.

We lose the stories of lesser-known characters in remote parts of the world, eroding our understanding of the messy experience of being human.

Where will we find the stories of 64-year-old John from Traralgon, who died from asbestosis? Or seven-year-old Raha from Jaipur, whose future is a “choice” between marriage at the age of 12 and sexual exploitation?

OpenAI’s creations are not the “machines of loving grace” envisioned in the 1967 poem by Richard Brautigan, where he dreams of a “cybernetic meadow”.

Scraping is a venal money grab by oligarchs who are – incidentally – scrambling to protect their own intellectual property during an AI arms race.

The code behind ChatGPT is protected by copyright, which is considered to be a literary work. (I don’t know whether to laugh or cry.)

Meta has already stolen the work of thousands of Australian writers.

Now, our own Productivity Commission is considering weakening our Copyright Act to include an exemption for text and data mining, which may well put us out of business.

In its response, The Australia Institute uses the analogy of a car: “Imagine grabbing the keys for a rental car and just driving around for a while without paying to hire it or filling in any paperwork. Then imagine that instead of being prosecuted for breaking the law, the government changed the law to make driving around in a rental car legal.”

It’s more like taking a piece out of someone’s soul, chucking it into a machine and making it into something entirely different. Ugly. Inhuman.

The commission’s report seems to be an absurdist text. The argument for watering down copyright is that it will lead to more innovation. But the explicit purpose of the Copyright Act is to protect innovation, in the form of creative endeavour.

Our work is being devalued, dismissed and destroyed; our livelihoods demolished.

In this age of techno-capitalism, it appears the only worthwhile innovation is being built by the “brogrammers”.

US companies are pinching Australian content, using it to train their models, then selling it back to us. It’s an extractive industry: neocolonialism, writ large."

Wednesday, August 13, 2025

Judge rejects Anthropic bid to appeal copyright ruling, postpone trial; Reuters, August 12, 2025

Reuters; Judge rejects Anthropic bid to appeal copyright ruling, postpone trial

"A federal judge in California has denied a request from Anthropic to immediately appeal a ruling that could place the artificial intelligence company on the hook for billions of dollars in damages for allegedly pirating authors' copyrighted books.

U.S. District Judge William Alsup said on Monday that Anthropic must wait until after a scheduled December jury trial to appeal his decision that the company is not shielded from liability for pirating millions of books to train its AI-powered chatbot Claude."

Monday, August 11, 2025

Boston Public Library aims to increase access to a vast historic archive using AI; NPR, August 11, 2025

NPR; Boston Public Library aims to increase access to a vast historic archive using AI

"Boston Public Library, one of the oldest and largest public library systems in the country, is launching a project this summer with OpenAI and Harvard Law School to make its trove of historically significant government documents more accessible to the public.

The documents date back to the early 1800s and include oral histories, congressional reports and surveys of different industries and communities...

Currently, members of the public who want to access these documents must show up in person. The project will enhance the metadata of each document and will enable users to search and cross-reference entire texts from anywhere in the world. 

Chapel said Boston Public Library plans to digitize 5,000 documents by the end of the year, and if all goes well, grow the project from there...

Harvard University said it could help. Researchers at the Harvard Law School Library's Institutional Data Initiative are working with libraries, museums and archives on a number of fronts, including training new AI models to help libraries enhance the searchability of their collections. 

AI companies help fund these efforts, and in return get to train their large language models on high-quality materials that are out of copyright and therefore less likely to lead to lawsuits. (Microsoft and OpenAI are among the many AI players targeted by recent copyright infringement lawsuits, in which plaintiffs such as authors claim the companies stole their works without permission.)"

Saturday, August 9, 2025

News Corp CEO Robert Thomson slams AI firms for stealing copyrighted material like Trump’s ‘Art of the Deal’; New York Post, August 6, 2025

Ariel Zilber, New York Post; News Corp CEO Robert Thomson slams AI firms for stealing copyrighted material like Trump’s ‘Art of the Deal’

"The media executive said the voracious appetite of the AI firms to train their bots on proprietary content without paying for it risks eroding America’s edge over rival nations.

“Much is made of the competition with China, but America’s advantage is ingenuity and creativity, not bits and bytes, not watts but wit,” he said.

“To undermine that comparative advantage by stripping away IP rights is to vandalize our virtuosity.”"

AI industry horrified to face largest copyright class action ever certified; Ars Technica, August 8, 2025

Ashley Belanger, Ars Technica; AI industry horrified to face largest copyright class action ever certified

"AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine.

Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued."

Wednesday, July 30, 2025

Insuring Intellectual Property – Examining AI and Fair Use; The National Law Review, July 29, 2025

Michael S. Levine, Geoffrey B. Fehling, Armin Ghiam, and Madalyn "Mady" Moore of Hunton Andrews Kurth, The National Law Review; Insuring Intellectual Property – Examining AI and Fair Use

"The frequency of lawsuits involving the development and deployment of AI technologies is increasing by the day. Recent lawsuits seeking to hold companies directly and secondarily liable for “joint enterprises” based on use (or alleged misuse) of copyrighted works for training AI models serve as important reminders about the protections that intellectual property (IP) insurance can offer to cover the risks associated with copyright infringement claims.

Recently, a California federal district court ruled that it was “fair use” for an AI software company to use copyrighted books to train its large language models (LLMs). However, the court also found the company’s unauthorized possession of over seven million pirated books that it downloaded from the internet (apparently for free) amounted to copyright infringement independent from whether the books were ultimately used to train the LLMs. In contrast, where the company purchased books before scanning them into digital files, the use was a permissible “fair use.”

The court’s order in Bartz et al. v. Anthropic PBC, No. 3:24-cv-05417 (N.D. Cal. June 23, 2025), highlights the nuanced permissible use of copyrighted training data and underscores why policyholders engaged in the use of copyrighted material should acquire and maintain robust IP insurance that will reliably respond to claims of alleged infringement."

European Creators Slam AI Act Implementation, Warn Copyright Protections Are Failing; The Hollywood Reporter; July 30, 2025

 Scott Roxborough, The Hollywood Reporter; European Creators Slam AI Act Implementation, Warn Copyright Protections Are Failing

"The coalition is asking the European Commission to revisit its implementation of the AI Act to ensure the law “lives up to its promise to safeguard European intellectual property rights in the age of generative AI.”"

Tuesday, July 29, 2025

Meta pirated and seeded porn for years to train AI, lawsuit says; Ars Technica, July 28, 2025

Ashley Belanger, Ars Technica; Meta pirated and seeded porn for years to train AI, lawsuit says

"Porn sites may have blown up Meta's key defense in a copyright fight with book authors who earlier this year said that Meta torrented "at least 81.7 terabytes of data across multiple shadow libraries" to train its AI models.

Meta has defeated most of the authors' claims and claimed there is no proof that Meta ever uploaded pirated data through seeding or leeching on the BitTorrent network used to download training data. But authors still have a chance to prove that Meta may have profited off its massive piracy, and a new lawsuit filed by adult sites last week appears to contain evidence that could help authors win their fight, TorrentFreak reported.

The new lawsuit was filed last Friday in a US district court in California by Strike 3 Holdings—which says it attracts "over 25 million monthly visitors" to sites that serve as "ethical sources" for adult videos that "are famous for redefining adult content with Hollywood style and quality."

After authors revealed Meta's torrenting, Strike 3 Holdings checked its proprietary BitTorrent-tracking tools designed to detect infringement of its videos and alleged that the company found evidence that Meta has been torrenting and seeding its copyrighted content for years—since at least 2018. Some of the IP addresses were clearly registered to Meta, while others appeared to be "hidden," and at least one was linked to a Meta employee, the filing said."

Saturday, July 26, 2025

AI and copyright – the state of play, post the US AI Action Plan; PetaPixel, July 25, 2025

Chris Middleton, PetaPixel; AI and copyright – the state of play, post the US AI Action Plan


[Kip Currier: This article effectively skewers the ridiculousness and hypocrisy of the assertion by Trump and some of the wealthiest corporations on the planet that licensing content to fuel AI LLMs is impossible and too onerous. AI companies would never let users make use of their IP without compensation and permission. Yet these same companies -- and now Trump, via his AI Action Plan -- argue that respecting the copyrights of content holders just isn't "doable".]

[Excerpt]

"The top six most valuable companies on Earth – in history, in fact – are all in AI and tech. Between them, NVIDIA, Microsoft, Apple, Amazon, Alphabet, and Meta already have a market capitalization of $12.9 trillion, roughly equivalent to the value of China's entire economy in 2017-18; or three times the Gross Domestic Product (GDP) of the third largest economy today, Germany, and half that of the US.

Spend trillions of dollars on planet-heating, water-guzzling AI data centers to run the likes of OpenAI's frontier models – systems that (in Trump's view) will be powered by coal? No problem. But license some books when you can scrape millions from known pirate sources? Impossible, it seems.

Whether US courts will agree with that absurd position is unknown."


Friday, July 25, 2025

Mark Cuban says the AI war ‘will get ugly’ and intellectual property ‘is KING’ in the AI world; Fortune, July 22, 2025

Sydney Lake, Fortune; Mark Cuban says the AI war ‘will get ugly’ and intellectual property ‘is KING’ in the AI world

"Major tech companies are battling for AI dominance, pouring tens of billions into infrastructure and offering sky-high compensation packages. Billionaire investor Mark Cuban notes this new phase will see firms locking down valuable AI innovations and expertise rather than sharing them."

Trump’s Comments Undermine AI Action Plan, Threaten Copyright; Publishers Weekly, July 23, 2025

Ed Nawotka, Publishers Weekly; Trump’s Comments Undermine AI Action Plan, Threaten Copyright

"Senate bill proposes 'opt-in' legislation

Trump's comments come on the heels of the introduction, by U.S. senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), of the AI Accountability and Personal Data Protection Act this past Monday following a hearing last week on AI companies' copyright infringement. The bipartisan legislation aims to hold AI firms liable for using copyrighted works or personal data without acquiring explicit consent to train AI models. It would empower individuals—including writers, artists, and content creators—to sue companies in federal court if their data or copyrighted works are used without consent. It also supports class action lawsuits and advocates for violators to pay robust penalties.

"AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse," said Hawley. "It’s time for Congress to give the American worker their day in court to protect their personal data and creative works. My bipartisan legislation would finally empower working Americans who now find their livelihoods in the crosshairs of Big Tech’s lawlessness."

"This bill embodies a bipartisan consensus that AI safeguards are urgent—because the technology is moving at accelerating speed, and so are dangers to privacy," added Blumenthal. "Enforceable rules can put consumers back in control of their data, and help bar abuses. Tech companies must be held accountable—and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent. Consumers must be given rights and remedies—and legal tools to make them real—not relying on government enforcement alone."

Thursday, July 24, 2025

Donald Trump Is Fairy-Godmothering AI; The Atlantic, July 23, 2025

Matteo Wong, The Atlantic; Donald Trump Is Fairy-Godmothering AI

"In a sense, the action plan is a bet. AI is already changing a number of industries, including software engineering, and a number of scientific disciplines. Should AI end up producing incredible prosperity and new scientific discoveries, then the AI Action Plan may well get America there faster simply by removing any roadblocks and regulations, however sensible, that would slow the companies down. But should the technology prove to be a bubble—AI products remain error-prone, extremely expensive to build, and unproven in many business applications—the Trump administration is more rapidly pushing us toward the bust. Either way, the nation is in Silicon Valley’s hands...

Once the red tape is gone, the Trump administration wants to create a “dynamic, ‘try-first’ culture for AI across American industry.” In other words, build and test out AI products first, and then determine if those products are actually helpful—or if they pose any risks.

Trump gestured toward other concessions to the AI industry in his speech. He specifically targeted intellectual-property laws, arguing that training AI models on copyrighted books and articles does not infringe upon copyright because the chatbots, like people, are simply learning from the content. This has been a major conflict in recent years, with more than 40 related lawsuits filed against AI companies since 2022. (The Atlantic is suing the AI company Cohere, for example.) If courts were to decide that training AI models with copyrighted material is against the law, it would be a major setback for AI companies. In their official recommendations for the AI Action Plan, OpenAI, Microsoft, and Google all requested a copyright exception, known as “fair use,” for AI training. Based on his statements, Trump appears to strongly agree with this position, although the AI Action Plan itself does not reference copyright and AI training.

Also sprinkled throughout the AI Action Plan are gestures toward some MAGA priorities. Notably, the policy states that the government will contract with only AI companies whose models are “free from top-down ideological bias”—a reference to Sacks’s crusade against “woke” AI—and that a federal AI-risk-management framework should “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Trump signed a third executive order today that, in his words, will eliminate “woke, Marxist lunacy” from AI models...

Looming over the White House’s AI agenda is the threat of Chinese technology getting ahead. The AI Action Plan repeatedly references the importance of staying ahead of Chinese AI firms, as did the president’s speech: “We will not allow any foreign nation to beat us; our nation will not live in a planet controlled by the algorithms of the adversaries,” Trump declared...

But whatever happens on the international stage, hundreds of millions of Americans will feel more and more of generative AI’s influence—on salaries and schools, air quality and electricity costs, federal services and doctor’s offices. AI companies have been granted a good chunk of their wish list; if anything, the industry is being told that it’s not moving fast enough. Silicon Valley has been given permission to accelerate, and we’re all along for the ride."

Donald Trump Says AI Companies Can’t Be Expected To Pay For All Copyrighted Content Used In Their Training Models: “Not Do-Able”; Deadline, July 23, 2025

Ted Johnson and Tom Tapp, Deadline; Donald Trump Says AI Companies Can’t Be Expected To Pay For All Copyrighted Content Used In Their Training Models: “Not Do-Able”

 

[Kip Currier: Don't be fooled by the flimflam rhetoric in Trump's AI Action Plan, unveiled yesterday (July 23, 2025). Where the plan says “We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas,” it's actually the exact opposite: the Trump plan is censorious and will "cancel out" truth (e.g., on climate science, misinformation, and disinformation) in Orwellian fashion.]


[Excerpt]

"The plan is a contrast to Trump’s predecessor, Joe Biden, who focused on the government’s role in ensuring that the technology was safe.

The Trump White House plan also recommends updating federal procurement guidelines “to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.” Also recommended is revising the National Institute of Standards and Technology AI Risk Management Framework to remove references to misinformation, DEI and climate change.

“We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas,” the plan says."

Wednesday, July 23, 2025

Trump derides copyright and state rules in AI Action Plan launch; Politico, July 23, 2025

Mohar Chatterjee, Politico; Trump derides copyright and state rules in AI Action Plan launch

"President Donald Trump criticized copyright enforcement efforts and state-level AI regulations Wednesday as he launched the White House’s AI Action Plan on a mission to dominate the industry.

In remarks delivered at a “Winning the AI Race” summit hosted by the All-In Podcast and the Hill and Valley Forum in Washington, Trump said stringent copyright enforcement was unrealistic for the AI industry and would kneecap U.S. companies trying to compete globally, particularly against China.

“You can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for,” he said. “You just can’t do it because it’s not doable. ... China’s not doing it.”

Trump’s comments were a riff as his 28-page AI Action Plan did not wade into copyright and administration officials told reporters the issue should be left to the courts to decide.

Trump also signed three executive orders. One will fast track federal permitting, streamline reviews and “do everything possible to expedite construction of all major AI infrastructure projects,” Trump said. Another expands American exports of AI hardware and software. A third order bans the federal government from procuring AI technology “that has been infused with partisan bias or ideological agendas,” as Trump put it...

Trump echoed tech companies’ complaints about state AI laws creating a patchwork of regulation. “You can’t have one state holding you up,” he said. “We need one common sense federal standard that supersedes all states, supersedes everybody.”"

Tuesday, July 22, 2025

Commentary: A win-win-win path for AI in America; The Post & Courier, July 22, 2025

Keith Kupferschmid, The Post & Courier; Commentary: A win-win-win path for AI in America

"Contrary to claims that these AI training deals are impossible to make at scale, a robust free market is already emerging in which hundreds (if not thousands) of licensed deals between AI companies and copyright owners have been reached. New research shows it is possible to create fully licensed data sets for AI.

No wonder one federal judge recently called claims that licensing is impractical “ridiculous,” given the billions at stake: “If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders.” Just like AI companies don’t dispute that they have to pay for energy, infrastructure, coding teams and the other inputs their operations require, they need to pay for creative works as well.

America’s example to the world is a free-market economy based on the rule of law, property rights and freedom to contract — so, let the market innovate solutions to these new (but not so new) licensing challenges. Let’s construct a pro-innovation, pro-worker approach that replaces the false choice of the AI alarmists with a positive, pro-America pathway to leadership on AI."

Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models; Deadline, July 21, 2025

Ted Johnson , Deadline; Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models

"Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT) introduced legislation on Monday that would restrict AI companies from using copyrighted material in their training models without the consent of the individual owner.

The AI Accountability and Personal Data Protection Act also would allow individuals to sue companies that use their personal data or copyrighted works without their “express, prior consent.”

The bill addresses a raging debate between tech and content owners, one that has already led to extensive litigation. Companies like OpenAI have argued that the use of copyrighted materials in training models is a fair use, while figures including John Grisham and George R.R. Martin have challenged that notion."

Sunday, July 20, 2025

AI guzzled millions of books without permission. Authors are fighting back.; The Washington Post, July 19, 2025

The Washington Post; AI guzzled millions of books without permission. Authors are fighting back.


[Kip Currier: I've written this before on this blog and I'll say it again: technology companies would never allow anyone to freely vacuum up their content and use it without permission or compensation. Period. Full Stop.]


[Excerpt]

"Baldacci is among a group of authors suing OpenAI and Microsoft over the companies’ use of their work to train the AI software behind tools such as ChatGPT and Copilot without permission or payment — one of more than 40 lawsuits against AI companies advancing through the nation’s courts. He and other authors this week appealed to Congress for help standing up to what they see as an assault by Big Tech on their profession and the soul of literature.

They found sympathetic ears at a Senate subcommittee hearing Wednesday, where lawmakers expressed outrage at the technology industry’s practices. Their cause gained further momentum Thursday when a federal judge granted class-action status to another group of authors who allege that the AI firm Anthropic pirated their books.

“I see it as one of the moral issues of our time with respect to technology,” Ralph Eubanks, an author and University of Mississippi professor who is president of the Authors Guild, said in a phone interview. “Sometimes it keeps me up at night.”

Lawsuits have revealed that some AI companies had used legally dubious “torrent” sites to download millions of digitized books without having to pay for them."

Wednesday, July 16, 2025

Can Gen AI and Copyright Coexist?; Harvard Business Review, July 16, 2025

Harvard Business Review; Can Gen AI and Copyright Coexist?

"We’re experts in the study of digital transformation and have given this issue a lot of thought. We recently served, for example, on a roundtable of 10 economists convened by the U.S. Copyright Office to study the implications of gen AI on copyright policy. We recognize that the two decisions are far from the last word on this topic; both will no doubt be appealed to the Ninth Circuit and then subsequently to the Supreme Court. But in the meantime, we believe there are already many lessons to be learned from these decisions about the implications of gen AI for business—lessons that will be useful for leaders in both the creative industries and gen AI companies."