Showing posts with label AI tech companies. Show all posts

Saturday, May 16, 2026

Anthropic’s $1.5B copyright settlement is getting messy as judge delays approval; Ars Technica, May 15, 2026

Ashley Belanger, Ars Technica; Anthropic’s $1.5B copyright settlement is getting messy as judge delays approval

"After several authors and class members raised objections to Anthropic’s $1.5 billion settlement over its widespread book piracy to train AI, a federal judge has delayed final approvals of the settlement.

On Thursday, US District Judge Araceli Martinez-Olguin declined to rubber-stamp what’s regarded as the largest copyright settlement in US history. Instead, she wanted to better understand why some class members were objecting and opting out of the settlement. So, she asked authors to address key concerns of objectors, who argued that lawyers’ compensation was way too high and payments to class members were a “pittance.”...

Objectors may not win every fight, but they have seemingly persuaded the court to at least entertain their strongly worded pleas, including warnings that the settlement may not survive an appeal if the terms aren’t re-examined. Notably, their objections came shortly before a group of 25 class members opting out of the settlement filed a new lawsuit, showing that Anthropic is not done fighting these claims.

“For the Court to agree that counsel’s request of nearly a third of a billion dollars, while individual plaintiffs settle for a pittance of available compensation and no protections against future abuse is an aberration of civil justice and a slap in the face to all those who labored to publish their works,” Story said. “Such a decision would also further the too-often-observed stereotype that … class-action Plaintiffs are merely tools used to obtain Powerball-size payouts to attorneys.”

Judge William Alsup, who initially approved the settlement but has since retired, also questioned whether the lawyers’ fees were too high. Worried that the settlement was being “shoved down the throat of authors,” he recommended an independent investigation to ensure no improper attorneys’ fees would be granted, but according to Lea Bishop, a non-class member objector and professor of copyright law, the recommendation “was not squarely disclosed to incoming Judge Martinez-Olguin” in a status report submitted by authors’ lawyers. Additionally, class members weren’t notified of the investigation.

Authors must respond to objections raised by May 21, when Anthropic will also have to file a brief explaining “why late opt outs should not be honored,” the judge ordered."

Friday, May 15, 2026

Authors, publishers near final approval of $1.5 billion Anthropic copyright settlement; Courthouse News Service, May 14, 2026

Courthouse News Service; Authors, publishers near final approval of $1.5 billion Anthropic copyright settlement

"Judge Araceli Martínez-Olguín, a Joe Biden appointee, allowed objectors to address the court, where several spoke about the concerns they had with how the plaintiffs put together the eligible works list.

One class member told the judge that the works list undercounts the number of eligible works in the class by treating each copyright registration number as a single work, regardless of how many books are covered by the registration. The class member explained that she has certain group copyright registration numbers that include 40 separate, independently published novels under one registration number, all of which were downloaded by Anthropic without permission. However, under the current terms of the settlement agreement, the novels would be considered just one claimable work.

Another class member spoke to the exclusion of works that were published under a pseudonym, disadvantageous to small publishers and self-published authors in the class, while a third said they believed a one-time payment was not enough because Anthropic was continuing to profit off the copyrighted work they stole.

James H. Bartolomei III of Duncan Firm, an attorney representing four other objectors, asked the court to reopen the opt-out period as certain key documents from the case were only uploaded to the settlement website recently.

“Nothing I am asking for takes a dollar away from any class member who filed a claim. I’m not asking the court to stop the settlement from ever being approved. Just for sufficient information to make an informed choice,” he said."

Why We Keep Tricking Ourselves Into Thinking A.I. Is Conscious; The New York Times, May 15, 2026

The New York Times; Why We Keep Tricking Ourselves Into Thinking A.I. Is Conscious

"Whenever there are large-scale shifts in media, humans have to adapt their cultural habits. Film and radio, for instance, meant voices of people not physically in the room with you may echo through. Adapting our reading practices to large language model output is a shift just like that one, where we change what we normally expect from our surroundings. We don’t expect meaningful and rhetorically powerful prose to come from anything but a conscious mind. But now it does. We cannot afford to believe the marketing message from A.I. companies that we may be dealing with some spiritual essence. In the age of cultural A.I., technical expertise alone won’t save us. We’ll have to add a new form of reading to make sense of our new world."

Committee publishes Government response to AI and copyright report; UK Parliament, May 15, 2026

UK Parliament; Committee publishes Government response to AI and copyright report

 "Chair’s response

Baroness Keeley, Chair of the Communications and Digital Committee, said:

“I am pleased that the Government’s response today confirms what it set out in March: that it no longer has a preference for introducing a broad copyright exception for AI training with an opt-out mechanism. It was clear that this approach would have been unworkable and placed an unfair burden on individual rightsholders. 

“The UK AI licensing market is emerging, and the Government now needs to create the conditions that will enable it to flourish. It can do this by ruling out any reform to copyright law that removes incentives to license, including proposals for a new commercial research exception, and committing to statutory transparency requirements for large AI developers. The Government says it will develop ‘best practice’ to encourage developers to be more transparent about the sources they use to train their models, but best practice alone will not promote licensing, drive compliance or enable robust enforcement. Only a mandatory framework will create the level playing field needed to foster responsible training data practices. 

“The coming months will be crucial in establishing a clear direction on AI and copyright for all stakeholders. The recently launched Sovereign AI Fund gives the Government the opportunity to insist on enhanced transparency and respect for copyright from the high-potential AI companies it supports. We will keep up a continued dialogue with Ministers to ensure that the Government is using all available levers, and demonstrating the necessary ambition, to drive meaningful progress on this issue.”

Monday, May 11, 2026

Google Says Criminal Hackers Used A.I. to Find a Major Software Flaw; The New York Times, May 11, 2026

The New York Times; Google Says Criminal Hackers Used A.I. to Find a Major Software Flaw

"A criminal hacking group recently attempted to launch a widespread cyberattack that appeared to rely on artificial intelligence to detect a previously unknown bug, Google said in research published Monday, highlighting the potential threat that A.I. poses to digital security.

Security experts have feared for years that malicious hackers could eventually rely on A.I. models to identify undisclosed flaws in computer code to launch crippling attacks that are difficult to guard against. That fear was largely theoretical until now.

“We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability,” the report said.

The tech giant did not say precisely when the thwarted attack happened, whom it was targeting or which A.I. platform the hackers used, but the company added that it did not believe it was its own Gemini chatbot."

Sunday, May 10, 2026

A.I. Populism Is Here. And No One Is Ready.; The New York Times, May 8, 2026

The New York Times; A.I. Populism Is Here. And No One Is Ready. Silicon Valley oligarchs worried about the risks their technology posed to the world. They forgot about people.

"In one sense, the vision peddled by A.I. companies is remarkably depersonalized: We hand more and more responsibility and judgment off to superintelligent black boxes, which rapidly begin shaping the course of the human future with decisions that remain illegible to the rest of us, including their designers. “People outside the field are often surprised and alarmed to learn that we do not understand how our own A.I. creations work,” Anthropic’s Dario Amodei wrote last year. “They are right to be concerned: This lack of understanding is essentially unprecedented in the history of technology.”

In another sense, and in the meantime, A.I. represents perhaps the most personalized sales pitch ever foisted on the passive American consumer — a vision of a near-total takeover of the country’s economic, social and cognitive lives by tools engineered by just five companies, run by five particular people, several of whom are widely described as sociopaths. The list is so short that you may know most of them by first name: Sam, Dario, Elon and Mark. (Demis Hassabis, who runs Google’s DeepMind, is perhaps less famous.)

These men are all already billionaires, or close to it, and on their current trajectories their wealth and influence look set to expand exponentially as, around them, anti-elitism multiplies, too. Perhaps this is one reason 50 percent of Americans told the Pew Research Center last year they were more concerned than excited about what’s to come from A.I. Only 10 percent said they were more excited. That is a yawning gap into which an entire society is being asked to tumble."

Saturday, May 9, 2026

Tech is turning increasingly to religion in a quest to create ethical AI; AP, May 7, 2026

Krysta Fauria, AP; Tech is turning increasingly to religion in a quest to create ethical AI

"As concerns mount over artificial intelligence and its rapid integration into society, tech companies are increasingly turning to faith leaders for guidance on how to shape the technology — a surprising about-face on Silicon Valley’s longstanding skepticism of organized religion.

Leaders from various religious groups met last week with representatives from companies including Anthropic and OpenAI for the inaugural “Faith-AI Covenant” roundtable in New York to discuss how best to infuse morality and ethics into the fast-developing technology. It was organized by the Geneva-based Interfaith Alliance for Safer Communities, which seeks to take on issues such as extremism, radicalization and human trafficking. The roundtable is expected to be the first of several around the globe, including in Beijing, Nairobi and Abu Dhabi.

Tech executives need to recognize their power — and their responsibility — to make the right decisions, said Baroness Joanna Shields, a key partner in the initiative. She worked as a tech executive with stints at Google and Facebook before pivoting to British politics.

“Regulation can’t keep up with this,” she said. But the leaders of the world’s religions, with billions of followers globally, have the “expertise of shepherding people’s moral safety,” she reasoned. Faith leaders ought to have a voice, Shields said.

“This dialogue, this direct connection is so important because the people who are building this understand the power and capabilities of what they’re building and they want to do it right — most of them,” she said of AI tech executives.

The goal of this initiative, according to Shields, is an eventual “set of norms or principles” informed by different groups and faiths, from Christians to Sikhs to Buddhists, that companies will abide by...

Present at the meeting were a variety of faith groups, including representatives from the Hindu Temple Society of North America, the Baha’i International Community, The Sikh Coalition, the Greek Orthodox Archdiocese of America and The Church of Jesus Christ of Latter-day Saints, widely known as the Mormon church...

The partnership highlights a growing coalition between faith and tech, born out of an effort to create moral AI — a contested concept which begs questions about whether that is possible and what it means...

“There’s some aspect of PR to it. The slogan was ‘Move fast and break things.’ And they broke too many things and too many people,” said Brian Boyd, the U.S. faith liaison for the nonprofit Future of Life Institute...

But other advocates for AI regulation and safety aren’t so sure these efforts are genuine.

“At best it’s a distraction. At worst it’s diverting attention from things that really matter,” said Rumman Chowdhury, the CEO of the nonprofit Humane Intelligence and the U.S. science envoy for AI under the Biden administration.

Chowdhury says she’s not inclined to believe religion is the best place to help answer questions surrounding AI and ethics, but thinks she understands why companies are increasingly turning to it.

“I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics,” she said. “They have very quickly realized that that’s just not true. That’s not real. So now they’re looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.”"

Friday, May 8, 2026

Meta’s AI Copyright Fight Just Escalated and Hollywood Is Watching Closely; Los Angeles Magazine, May 7, 2026

Los Angeles Magazine; Meta’s AI Copyright Fight Just Escalated and Hollywood Is Watching Closely

A new lawsuit against Mark Zuckerberg and Meta could reshape how studios, publishers and tech companies train the next generation of artificial intelligence

"The AI Gold Rush Is Running Into Copyright Law

According to the lawsuit filed in Manhattan federal court, Meta allegedly pulled material from massive libraries of pirated books and scraped internet content to train Llama, the company’s flagship large language model. Publishers argue the practice amounts to one of the largest copyright violations in modern history."

Thursday, May 7, 2026

Scott Turow's latest real-life legal thriller: Suing Meta for copyright infringement; NPR, May 5, 2026

NPR; Scott Turow's latest real-life legal thriller: Suing Meta for copyright infringement

""All Americans should understand that the bold future promised by A.I., has been, to paraphrase the investigative writer Alex Reisner, created with stolen words," said Turow in a statement to NPR. "It is all the more shameful that these violations of the law were undertaken by one of the richest corporations in the world."

According to the complaint, Meta "briefly considered licensing deals with major publishers" but changed its strategy in April 2023. The question of whether to license or pirate moving forward was "escalated" to Zuckerberg, after which, the complaint alleges, Meta's business development team received verbal instructions to stop licensing efforts. "If we license once [sic] single book, we won't be able to lean into the fair use strategy," a Meta employee is quoted as saying in the complaint.

"It's the most flagrant copyright breach in history," said Authors Guild CEO Mary Rasenberger in a statement to NPR. "And these voracious tech companies need to be held accountable.""

Wednesday, May 6, 2026

Publishers sue Meta, claiming it violated copyrights in training AI with their books; The Washington Post, May 5, 2026

The Washington Post; Publishers sue Meta, claiming it violated copyrights in training AI with their books

"The case, filed in the U.S. District Court for the Southern District of New York, is the latest in a string of lawsuits brought by publishers, authors, artists, photographers and news outlets aimed at forcing tech companies to compensate them for using their works to train their AI models. The plaintiffs argue in the lawsuit that the AI model’s ability to quickly produce knockoffs and summaries of copyrighted books threatens the livelihoods of publishers and authors.

A Meta spokesperson said in a statement that the company would “fight this lawsuit aggressively.”

“AI is powering transformative innovations, productivity and creativity for individuals and companies, and courts have rightly found that training AI on copyrighted material can qualify as fair use,” the spokesperson said.

The publishers’ complaint states Meta distributed millions of copyrighted works without authorization and without compensating authors or publishers, claiming that Zuckerberg “personally authorized and actively encouraged the infringement.” They also claim that Meta removed copyright notices and copyright management information from the works used to train the AI model, known as Llama."

Even More Authors, Publishers Sue Meta Over Copyright in AI Training: What's Different Now; CNET, May 5, 2026

Katelyn Chedraoui, CNET; Even More Authors, Publishers Sue Meta Over Copyright in AI Training: What's Different Now

Meta won a previous AI lawsuit brought by authors. Publishers are taking a different route this time.

"New lawsuit, same questions

Copyright is one of the most contentious legal issues around AI. Tech companies like Meta need high-quality, human-created data to build and refine their AI models. Nearly all of this material is protected by copyright. That means tech companies have to enter into licensing agreements or defend their use of the content as fair use under a provision of copyright law.

Meta and Anthropic have both won previous cases in lawsuits brought by authors, successfully defending their fair use. Anthropic agreed to settle some piracy claims with authors for $1.5 billion, or about $3,000 per pirated work. Both judges warned in their decisions that this won't be the result in every lawsuit...

One of the biggest considerations in these cases is whether tech companies' use of copyrighted books will make it harder for human authors to sell their work or otherwise affect the marketplace."

Tuesday, May 5, 2026

Behind the White House’s Potential Rethink on A.I.; The New York Times, May 5, 2026

Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly, Brian O’Keefe and Lauren Hirsch, The New York Times; Behind the White House’s Potential Rethink on A.I.

Artificial intelligence has become a national security concern. That has federal officials rethinking how lightly the government should regulate the technology.

"Andrew here. Should there be the equivalent of the F.D.A. for artificial intelligence models? Should there be a government approval process before new models are released?

Those are some of the big questions as the White House weighs an executive order that could increase oversight of new A.I. tools. Will it need congressional approval? How much will the industry push back? More below.

The debate over new A.I. guardrails

For most of his second term, President Trump has embraced a laissez-faire approach to artificial intelligence. Let Silicon Valley do its thing, his administration reasoned, and it would maintain its lead over China and other rivals.

But a report by The Times about the White House potentially taking a heavier hand in overseeing A.I., including reviewing new models before they’re released, underscores how even the Trump administration has to reckon with how powerful these tools are becoming."

Monday, May 4, 2026

White House Considers Vetting A.I. Models Before They Are Released; The New York Times, May 4, 2026

Tripp Mickle, Julian E. Barnes and Sheera Frenkel, The New York Times; White House Considers Vetting A.I. Models Before They Are Released

"President Trump, who promoted a hands-off approach to artificial intelligence and gave Silicon Valley free rein to roll out the technology, is considering the introduction of government oversight over new A.I. models, according to U.S. officials and people briefed on the deliberations.

The administration is discussing an executive order to create an A.I. working group that would bring together tech executives and government officials to examine potential oversight procedures, according to U.S. officials, who declined to be identified in order to discuss deliberations over sensitive policies. Among the potential plans is a formal government review process for new A.I. models.

In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. 

The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said."

Poll: The midterms' new big players are pushing agendas that voters don’t fully support; Politico, May 3, 2026

Erin Doherty, Jasper Goodman, Jessica Piper, Daniel Barnes and Brendan Bordelon, Politico; Poll: The midterms' new big players are pushing agendas that voters don’t fully support

"Deep-pocketed political groups tied to artificial intelligence and cryptocurrency are rapidly reshaping the midterm money landscape — but many Americans are uneasy with the industries behind the spending.

New results from The POLITICO Poll find broad public skepticism about crypto and AI, creating a possible conflict for candidates benefitting from an influx of contributions from the two industries. These groups are pouring millions of dollars into competitive 2026 races to elevate politicians who they believe will support their agendas in Washington.

Meanwhile, Americans have been slow to embrace either technology.

A 45 percent plurality of Americans say investing in cryptocurrency is not worth the risk, even if it can yield high returns, and a 44 percent plurality say AI is developing too quickly, according to the April survey conducted by independent firm Public First.

Nearly half of Americans say they trust a traditional bank with their money more than a cryptocurrency platform, while just 17 percent say the opposite. And two-thirds support lawmakers either imposing strict regulations or setting broad principles for the AI industry."

Friday, May 1, 2026

Pentagon Makes Deals With A.I. Companies to Expand Classified Work; The New York Times, May 1, 2026

Julian E. Barnes, The New York Times; Pentagon Makes Deals With A.I. Companies to Expand Classified Work

"The Pentagon announced on Friday that it had reached deals with some of the technology industry’s biggest companies in an effort to expand the military’s artificial intelligence capabilities and increase the number of firms authorized to be on classified networks.

The companies, according to the Defense Department, agreed to allow the Pentagon to employ their technology for “any lawful use,” a standard resisted by Anthropic, which was initially the only artificial intelligence model available on classified markets.

The Pentagon had previously confirmed deals with Elon Musk’s xAI, OpenAI and Google. In addition the Pentagon said it had reached deals with Amazon Web Services, Microsoft, Nvidia and Reflection AI, a start-up."

Thursday, April 30, 2026

The Secret Weapon Against AI Dominance; The Atlantic, April 30, 2026

Jacob Noti-Victor and Xiyin Tang, The Atlantic; The Secret Weapon Against AI Dominance

"More than 90 lawsuits have been filed by creators against AI companies for copyright infringement. Authors, musicians, visual artists, and news publishers have all accused firms such as OpenAI, Meta, and Anthropic of using their copyrighted works to train AI models without permission. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) These cases are frequently framed as the defining fight over the future of creative labor and the entertainment industry as a whole. As one of these lawsuits put it, artists are seeking to end “infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work.”

But the future of creative labor will more likely be decided through a different question within copyright law, one that has received far less attention: To what extent should AI-generated works receive copyright protection at all? In a 2024 case, Thaler v. Perlmutter, the Court of Appeals for the District of Columbia held that a work generated autonomously by an AI system cannot be protected by copyright, because copyright requires a human “author.” The Supreme Court declined to review that decision in March. With the lower-court decision left in place, the question now becomes how much AI content can be incorporated into a work before it becomes mostly or totally uncopyrightable; courts have not yet weighed in on this but may soon.

The Thaler decision (and any future decisions that refine it) will have major economic consequences for the creative industries and the workers they employ."

Monday, April 27, 2026

Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics; The Christian Science Monitor, April 27, 2026

The Christian Science Monitor; Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics

"A dispute between ChatGPT’s parent company, OpenAI, and one of the company’s founders – billionaire and tech entrepreneur Elon Musk – will play out in a federal court in Oakland, California, beginning April 27. 

Mr. Musk, who left the company in 2018, is suing OpenAI, claiming its leaders manipulated him into thinking he was contributing money to a nonprofit. He wants the company returned to its nonprofit status and seeks monetary compensation. 

OpenAI says Mr. Musk, who has since raised billions through the launch of his own for-profit company xAI, is misrepresenting facts to gain a competitive edge."

Thursday, April 23, 2026

AI's a suck up. Research shows how it flatters and suggests we're not to blame; NPR, April 23, 2026

 Ari Daniel, NPR; AI's a suck up. Research shows how it flatters and suggests we're not to blame

"In a recent study published in the journal Science, Cheng and her colleagues report that AI models offer affirmations more often than people do, even for morally dubious or troubling scenarios. And they found that this sycophancy was something that people trusted and preferred in an AI — even as it made them less inclined to apologize or take responsibility for their behavior.

The findings, experts say, highlight how this common AI feature may keep people returning to the technology, despite the harm it causes them.

It's not unlike social media in that both "drive engagement by creating addictive, personalized feedback loops that learn exactly what makes you tick," says Ishtiaque Ahmed, a computer scientist at the University of Toronto who wasn't involved in the research."

Meta will cut 10% of workforce as company pushes deeper into AI; CNBC, April 23, 2026

Jonathan Vanian, CNBC; Meta will cut 10% of workforce as company pushes deeper into AI

"Meta plans to lay off 10% of its workforce, equaling about 8,000 jobs, as it continues ramping up investments in artificial intelligence.

The cuts will begin on May 20, and the company is scrapping plans to hire people for 6,000 open roles, according to a Thursday memo to employees. Bloomberg was first to report on the layoffs. 

Meta’s latest round of cuts follows several smaller job reductions that the company said were necessary to improve efficiency while focusing its efforts on generative AI, where it’s lagged OpenAI, Google and Anthropic."