Showing posts with label AI tech companies.

Thursday, February 5, 2026

When AI and IP Collide: What Journalists Need to Know; National Press Foundation (NPF), January 22, 2026

National Press Foundation (NPF); When AI and IP Collide: What Journalists Need to Know

"With roughly 70 federal lawsuits waged against AI developers, the intersection of technology and intellectual property is now one of the most influential legal beats. Courts are jumping in to define the future of “fair use.” To bridge the gap between complex legal proceedings and the public’s understanding, NPF held a webinar to unpack these intellectual property battles. One thing all of the expert panelists agreed on: most cases are either an issue of input – i.e. what the AI models pull in to train on – or output – what AI generates, as in the case of Disney and other Hollywood studios v. Midjourney.

“The behavior here of AI companies and the assertion of fair use is completely understandable in our market capitalist system – all players want something very simple. They want their inputs for little or nothing and their outputs to be very expensive,” said Loyola Law professor Justin Hughes. “The fair use argument is all about AI companies wanting their inputs to be free, just like ranchers want their grazing land from the federal government to be free or their mining rights to be free.”

AI Copyright Cases Journalists Should Know:
  • Bartz et al. v. Anthropic: Anthropic reached a $1.5 billion settlement in a landmark case for the industry after a class of book authors accused the company of using pirated books to train the Claude AI model. “The mere training itself may be fair use, but the retention of these large copy data sets and their replication or your training from having taken pirated data sets, that’s not fair use,” Hughes explained.
  • The NYT Company v. Microsoft Corporation et al.: This is a massive multi-district litigation in New York in which the NYT is suing OpenAI. The Times has pushed for discovery into over 20 million private ChatGPT logs to prove that the model is being used to get past paywalls.
  • Advance Local Media LLC et al. v. Cohere Inc.: The case against the startup Cohere is particularly vital for newsrooms, as a judge ruled that AI-generated summaries infringe on news organizations’ ability to get traffic on their sites.

“We’ve seen, there’s been a lot of developers who have taken the kind of classic Silicon Valley approach of ask forgiveness rather than permission,” said Terry Hart, general counsel of the Association of American Publishers. “They have gone ahead and trained a lot of models using a lot of copyrighted works without authorization.”

Tech companies have trained massive models to ingest the entirety of the internet, including articles, without prior authorization, and Hughes points out that this is a repeated occurrence. AI companies often keep unauthorized copies of these vast datasets to retrain and tweak their models, leading to multiple steps of reproduction that could violate copyright.

AI and U.S. Innovation: A common defense from tech companies in Silicon Valley is that using these vast amounts of data is necessary for U.S. innovation and keeping the economy competitive. “‘We need to beat China, take our word for it, this is going to be great, and we’re just going to cut out a complete sector of the economy that’s critical to the success of our models,’” Hart said. “In the long run, that’s not good for innovation. It’s not good for the creative sectors and it’s not good for the AI sector.”

Reuters technology reporter Deepa Seetharaman has also heard the China competition argument, among others. “The metaphor that I’ll hear a lot here is, ‘it’s like somebody visiting a library and reading every book, except this is a system that can remember every book and remember all the pieces of every book. And so why are you … harming us for developing something that’s so capable?’” Seetharaman said. Hughes noted that humans are not walking into a library with a miniature high-speed photocopier to memorize every book; humans don’t memorize with the “faithful” precision of a machine. Hart added that the metaphor breaks down because technology has created a new market space that isn’t comparable to a human reader.

Speakers:
  • Wayne Brough, Resident Senior Fellow, Technology and Innovation Team, R Street
  • Terry Hart, General Counsel, Association of American Publishers
  • Justin Hughes, Honorable William Matthew Byrne Distinguished Professor of Law, Loyola Marymount University
  • Deepa Seetharaman, Tech Correspondent, Reuters
Summary and transcript: https://nationalpress.org/topic/when-...

This event is sponsored by The Copyright Alliance and NSG Next Solutions Group. This video was produced within the Evelyn Y. Davis studios. NPF is solely responsible for the content."

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI; The Guardian, February 5, 2026

 Anuj Behal, The Guardian; ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI


[Kip Currier: The largely unaddressed plight of content moderators became more real for me after reading this haunting 9/9/24 piece in the Washington Post, "I quit my job as a content moderator. I can never go back to who I was before."

As mentioned in the graphic article's byline, content moderator Alberto Cuadra spoke with journalist Beatrix Lockwood. Maya Scarpa's illustrations poignantly give life to Alberto Cuadra's first-hand experiences and ongoing impacts from the content moderation he performed for an unnamed tech company. I talk about Cuadra's experiences and the ethical issues of content moderation, social media, and AI in my Ethics, Information, and Technology book.]


[Excerpt]

"Murmu, 26, is a content moderator for a global technology company, logging on from her village in India’s Jharkhand state. Her job is to classify images, videos and text that have been flagged by automated systems as possible violations of the platform’s rules.

On an average day, she views up to 800 videos and images, making judgments that train algorithms to recognise violence, abuse and harm.

This work sits at the core of machine learning’s recent breakthroughs, which rest on the fact that AI is only as good as the data it is trained on. In India, this labour is increasingly performed by women, who are part of a workforce often described as “ghost workers”.

“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her...

“In terms of risk,” she says, “content moderation belongs in the category of dangerous work, comparable to any lethal industry.”

Studies indicate content moderation triggers lasting cognitive and emotional strain, often resulting in behavioural changes such as heightened vigilance. Workers report intrusive thoughts, anxiety and sleep disturbances.

A study of content moderators published last December, which included workers in India, identified traumatic stress as the most pronounced psychological risk. The study found that even where workplace interventions and support mechanisms existed, significant levels of secondary trauma persisted."

Tuesday, February 3, 2026

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report; The Guardian, February 3, 2026

The Guardian; ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report

"The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month...

1. The capabilities of AI models are improving...

2. Deepfakes are improving and proliferating...

3. AI companies have introduced biological and chemical risk safeguards...

4. AI companions have grown rapidly in popularity...

5. AI is not yet capable of fully autonomous cyber-attacks...

6. AI systems are getting better at undermining oversight...

7. The jobs impact remains unclear"

Friday, January 30, 2026

The $1.5 Billion Reckoning: AI Copyright and the 2026 Regulatory Minefield; JD Supra, January 27, 2026

Rob Robinson, JD Supra; The $1.5 Billion Reckoning: AI Copyright and the 2026 Regulatory Minefield

"In the silent digital halls of early 2026, the era of “ask for forgiveness later” has finally hit a $1.5 billion brick wall. As legal frameworks in Brussels and New Delhi solidify, the wild west of AI training data is being partitioned into clearly marked zones of liability and license. For those who manage information, secure data, or navigate the murky waters of eDiscovery, this landscape is no longer a theoretical debate—it is an active regulatory battlefield where every byte of training data carries a price tag."

Music publishers sue Anthropic for $3B over ‘flagrant piracy’ of 20,000 works; TechCrunch, January 29, 2026

Amanda Silberling, TechCrunch; Music publishers sue Anthropic for $3B over ‘flagrant piracy’ of 20,000 works 

"A cohort of music publishers led by Concord Music Group and Universal Music Group are suing Anthropic, saying the company illegally downloaded more than 20,000 copyrighted songs, including sheet music, song lyrics, and musical compositions.

The publishers said in a statement on Wednesday that the damages could amount to more than $3 billion, which would be one of the largest non-class action copyright cases filed in U.S. history.

This lawsuit was filed by the same legal team from the Bartz v. Anthropic case, in which a group of fiction and nonfiction authors similarly accused the AI company of using their copyrighted works to train products like Claude."

Sunday, January 25, 2026

How researchers got AI to quote copyrighted books word for word; Le Monde, January 24, 2026

Le Monde; How researchers got AI to quote copyrighted books word for word

"Where does artificial intelligence acquire its knowledge? From an enormous trove of texts used for training. These typically include vast numbers of articles from Wikipedia, but also a wide range of other writings, such as the massive Books3 dataset, which aggregates nearly 200,000 books without the authors' permission. Some proponents of conversational AI present these training datasets as a form of "universal knowledge" that transcends copyright law, adding that, protected or not, AIs do not memorize these works verbatim and only store fragmented information.

This argument has been challenged by a series of studies, the latest of which, published in early January by researchers at Stanford University and Yale University, is particularly revealing. Ahmed Ahmed and his coauthors managed to prompt four mainstream AI programs, disconnected from the internet to ensure no new information was retrieved, to recite entire pages from books."
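
To give a rough sense of how such a probe works: the researchers' exact protocol isn't reproduced here, but the core idea of a verbatim-memorization test can be sketched in a few lines of Python. The sketch below feeds a model the opening of a passage and measures how much of its greedy continuation matches the real text, token for token; the model name, sample passage, and overlap metric are illustrative placeholders, not the study's actual setup.

# Minimal memorization probe (illustrative sketch, not the study's method):
# give a model the start of a passage, decode greedily, and measure how much
# of the continuation reproduces the original text verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_NAME = "gpt2"  # placeholder; the study probed GPT, Claude, Gemini, and Grok

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def verbatim_overlap(prefix: str, reference: str, max_new_tokens: int = 50) -> float:
    """Fraction of greedily generated tokens that match the reference continuation."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding: the model's single most likely continuation
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = output[0][inputs["input_ids"].shape[1]:]  # tokens the model added
    expected = tokenizer(reference, return_tensors="pt")["input_ids"][0]
    n = min(len(generated), len(expected))
    if n == 0:
        return 0.0
    return (generated[:n] == expected[:n]).sum().item() / n

# A public-domain example; a high score on in-copyright text the model was
# never shown at inference time is the signature of training-time memorization.
score = verbatim_overlap(
    prefix="It was the best of times, it was the worst of times,",
    reference=" it was the age of wisdom, it was the age of foolishness,",
)
print(f"verbatim token overlap: {score:.0%}")

The studies described above scale this idea up to entire pages and to multiple frontier models, with the models kept offline so that any reproduced text must come from their weights rather than from retrieval.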

Friday, January 23, 2026

Actors And Musicians Help Launch “Stealing Isn’t Innovation” Campaign To Protest Big Tech’s Use Of Copyrighted Works In AI Models; Deadline, January 22, 2026

Ted Johnson, Deadline; Actors And Musicians Help Launch “Stealing Isn’t Innovation” Campaign To Protest Big Tech’s Use Of Copyrighted Works In AI Models

"A long list of musicians, content creators and actors are among those who have signed on to a new campaign to protest tech giants’ use of copyrighted works in their AI models.

The list of signees includes actors like Scarlett Johansson and Cate Blanchett, music groups like REM and authors like Brad Meltzer. 

The “Stealing Isn’t Innovation” campaign is being led by the Human Artistry Campaign. It states that, rather than “respect and protect” the creative community, “some of the biggest tech companies, many backed by private equity and other funders, are using American creators’ work to build AI platforms without authorization or regard for copyright law.”"

Copyright Law Set to Govern AI Under Trump’s Executive Order; Bloomberg Law, January 23, 2026

Michael McLaughlin, Bloomberg Law; Copyright Law Set to Govern AI Under Trump’s Executive Order


[Kip Currier: I posted this Bloomberg Law article excerpt to the Canvas site for the graduate students in my Intellectual Property and Open Movements course this term, along with the following note:

Copyright law is the potential giant-slayer vis-a-vis AI tech companies that have used copyrighted works as AI training data, without permission or compensation.

Information professionals who have IP acumen (e.g. copyright law and fair use familiarity) will have vital advantages on the job market and in their organizations.]


[Excerpt] 

"The legal landscape for artificial intelligence is entering a period of rapid consolidation. With President Donald Trump’s executive order in December 2025 establishing a national AI framework, the era of conflicting state-level rules may be drawing to a close.

But this doesn’t signal a reduction in AI-related legal risk. It marks the beginning of a different kind of scrutiny—one centered not on regulatory innovation but on the most powerful legal instrument already available to federal courts: copyright law.

The lesson emerging from recent AI litigation, most prominently Bartz v. Anthropic PBC, is that the greatest potential liability to AI developers doesn’t come from what their models generate. It comes from how those models were trained, and from the provenance of the content used in that training.

As the federal government asserts primacy over AI governance, the decisive question will be whether developers can demonstrate that their training corpora were acquired lawfully, licensed appropriately (unless in the public domain), and documented thoroughly."

It Makes Sense That People See A.I. as God; The New York Times, January 23, 2026

The New York Times; It Makes Sense That People See A.I. as God

"More and more, when it comes to our relationships with A.I. and the complex algorithms that shape so much of our modern subjectivity, we have slipped into the language and habits of mind we normally reserve for deities. And even people who do not make an explicit connection between A.I. and religion engage a kind of religious mode around the new technology."

Wednesday, January 21, 2026

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss; The Guardian, January 21, 2026

The Guardian; Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss

"Jamie Dimon, the boss of JP Morgan, has said artificial intelligence “may go too fast for society” and cause “civil unrest” unless governments and business support displaced workers.

While advances in AI will have huge benefits, from increasing productivity to curing diseases, the technology may need to be phased in to “save society”, he said...

Jensen Huang, the chief executive of the semiconductor maker Nvidia, whose chips are used to power many AI systems, argued that labour shortages rather than mass layoffs were the threat.

Playing down fears of AI-driven job losses, Huang told the meeting in Davos that “energy’s creating jobs, the chips industry is creating jobs, the infrastructure layer is creating jobs … jobs, jobs, jobs”...

Huang also argued that AI robotics was a “once-in-a-generation” opportunity for Europe, as the region had an “incredibly strong” industrial manufacturing base."

Tuesday, January 20, 2026

FREE WEBINAR: REGISTER: AI, Intellectual Property and the Emerging Legal Landscape; National Press Foundation, Thursday, January 22, 2026

National Press Foundation; REGISTER: AI, Intellectual Property and the Emerging Legal Landscape

"Artificial intelligence is colliding with U.S. copyright law in ways that could reshape journalism, publishing, software, and the creative economy.

The intersection of AI and intellectual property has become one of the most consequential legal battles of the digital age, with roughly 70 federal lawsuits filed against AI companies and copyright claims on works ranging from literary and visual work to music and sound recording to computer programs. Billions of dollars are at stake.

Courts are now deciding what constitutes “fair use,” whether and how AI companies may use copyrighted material to build models, what licensing is required, and who bears responsibility when AI outputs resemble protected works. The legal decisions will shape how news, art, and knowledge are produced — and who gets paid for them.

To help journalists better understand and report on the developing legal issues of AI and IP, join the National Press Foundation and a panel of experts for a wide-ranging discussion around the stakes, impact and potential solutions. Experts in technology and innovation as well as law and economics join journalists in this free online briefing 12-1 p.m. ET on Thursday, January 22, 2026."

Monday, January 19, 2026

AI companies will fail. We can salvage something from the wreckage; The Guardian, January 18, 2026

The Guardian; AI companies will fail. We can salvage something from the wreckage

"The growth narrative of AI is that AI will disrupt labor markets. I use “disrupt” here in its most disreputable tech-bro sense.

The promise of AI – the promise AI companies make to investors – is that there will be AI that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself and give the other half to the AI company.

That is the $13tn growth story that Morgan Stanley is telling. It’s why big investors are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.

Now, if AI could do your job, this would still be a problem. We would have to figure out what to do with all these unemployed people.

But AI can’t do your job. It can help you do your job, but that does not mean it is going to save anyone money...

After more than 20 years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans. That is why the “monkey selfie” is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.

And not only has the Copyright Office taken this position, they have defended it vigorously in court, repeatedly winning judgments to uphold this principle.

The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission...

AI is a bubble and bubbles are terrible.

Bubbles transfer the life savings of normal people who are just trying to have a dignified retirement to the wealthiest and most unethical people in our society, and every bubble eventually bursts, taking their savings with it."

Sunday, January 18, 2026

Publishers seek to join lawsuit against Google over AI training; Reuters, January 15, 2026

Reuters; Publishers seek to join lawsuit against Google over AI training

"Publishers Hachette Book Group and Cengage Group asked a California federal court on Thursday for permission to intervene in a proposed class action lawsuit against Google over the alleged misuse of copyrighted material used to train its artificial intelligence systems.

The publishers said in their proposed complaint that the tech company "engaged in one of the most prolific infringements of copyrighted materials in history" to build its AI capabilities, copying content from Hachette books and Cengage textbooks without permission...

The lawsuit currently involves groups of visual artists and authors who sued Google for allegedly misusing their work to train its generative AI systems. The case is one of many high-stakes lawsuits brought by artists, authors, music labels and other copyright owners against tech companies over their AI training."

Saturday, January 17, 2026

Public Shame Is the Most Effective Tool for Battling Big Tech; The New York Times, January 14, 2026

The New York Times; Public Shame Is the Most Effective Tool for Battling Big Tech

"It might be harder to shame the tech companies themselves into making their products safer, but we can shame third-party companies like toymakers, app stores and advertisers into ending partnerships. And with enough public disapproval, legislators might be inspired to act.

In some of the very worst corners of the internet might lie some hope...

Without more public shaming, what seems to be the implacable forward march of A.I. is unstoppable...

As Jay Caspian Kang noted in The New Yorker recently, changing social norms around kids and tech use can be powerful, and reforms like smartphone bans in schools have happened fairly quickly, and mostly on the state and local level."


Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Wednesday, January 14, 2026

Britain seeks 'reset' in copyright battle between AI and creators; Reuters, January 13, 2026

Reuters; Britain seeks 'reset' in copyright battle between AI and creators

"British technology minister Liz Kendall said on Tuesday the government was seeking a "reset" on plans to overhaul copyright rules to accommodate artificial intelligence, pledging to protect creators while unlocking AI's economic potential.

Creative industries worldwide are grappling with legal and ethical challenges posed by AI systems that generate original content after being trained on popular works, often without compensating the original creators."


Tuesday, January 13, 2026

‘Clock Is Ticking’ For Creators On AI Content Copyright Claims, Experts Warn; Forbes, January 9, 2026

 Rob Salkowitz, Forbes; ‘Clock Is Ticking’ For Creators On AI Content Copyright Claims, Experts Warn

"Despite this string of successes, creators like BT caution that content owners need to move quickly to secure any kind of terms. “A lot of artists have their heads in the sand with respect to AI,” he said. “The fact is, if they don’t come to some kind of agreement, they may end up with nothing.”

The concern is that AI models are increasingly being trained on synthetic data: that is, on the output of AI systems, rather than on content attributable to any individual creator or rights owner. Gartner estimates that 75% of AI training data in 2026 will be synthetic. That number could hit 100% by 2030. Once the tech companies no longer need human-produced content, they will stop paying for it.

“The quality of outputs from AI systems has been improving dramatically, which means that it is possible to train on synthetic data without risking model collapse,” said Dr. Daniela Braga, founder and CEO of the data training firm Defined.ai, in a separate interview at CES. “The window is definitely closing for individual rights owners to secure favorable terms.”

Other experts suggest that these claims may be overstated.

Braga says the best way creators can protect themselves is to do business with ethical companies willing to provide compensation for high-quality human-produced content and represent the superior value of that content to their customers. As models grow in capabilities, the need will shift from sheer volume of data to data that is appropriately tagged and annotated to fit easily into specific use cases.

There remain some profound questions around the sustainability of AI from a business standpoint, with demand for services among enterprise and consumers lagging the massive, and massively expensive, build-out of capacity. For some artists opposed to generative AI in its entirety, there may be the temptation to wait it out until the bubble bursts. After all, these artists created their work to be enjoyed by humans, not to be consumed in bulk by machines threatening their livelihoods. In light of those objections, the prospect of a meager payout might seem unappealing."

Sunday, January 11, 2026

‘Add blood, forced smile’: how Grok’s nudification tool went viral; The Guardian, January 11, 2026

The Guardian; ‘Add blood, forced smile’: how Grok’s nudification tool went viral

"This unprecedented mainstreaming of nudification technology triggered instant outrage from the women affected, but it was days before regulators and politicians woke up to the enormity of the proliferating scandal. The public outcry raged for nine days before X made any substantive changes to stem the trend. By the time it acted, early on Friday morning, degrading, non-consensual manipulated pictures of countless women had already flooded the internet."

Monday, January 5, 2026

AI copyright battles enter pivotal year as US courts weigh fair use; Reuters, January 5, 2026

Reuters; AI copyright battles enter pivotal year as US courts weigh fair use

"The sprawling legal fight over tech companies' vast copying of copyrighted material to train their artificial intelligence systems could be entering a decisive phase in 2026.

After a string of fresh lawsuits and a landmark settlement in 2025, the new year promises to bring a wave of rulings that could define how U.S. copyright law applies to generative AI. At stake is whether companies like OpenAI, Google and Meta can rely on the legal doctrine of fair use to shield themselves from liability – or if they must reimburse copyright holders, which could cost billions."