Showing posts with label AI tech companies. Show all posts

Saturday, May 9, 2026

Tech is turning increasingly to religion in a quest to create ethical AI; AP, May 7, 2026

 KRYSTA FAURIA, AP; Tech is turning increasingly to religion in a quest to create ethical AI

"As concerns mount over artificial intelligence and its rapid integration into society, tech companies are increasingly turning to faith leaders for guidance on how to shape the technology — a surprising about-face on Silicon Valley’s longstanding skepticism of organized religion.

Leaders from various religious groups met last week with representatives from companies including Anthropic and OpenAI for the inaugural “Faith-AI Covenant” roundtable in New York to discuss how best to infuse morality and ethics into the fast-developing technology. It was organized by the Geneva-based Interfaith Alliance for Safer Communities, which seeks to take on issues such as extremism, radicalization and human trafficking. The roundtable is expected to be the first of several around the globe, including in Beijing, Nairobi and Abu Dhabi.

Tech executives need to recognize their power — and their responsibility — to make the right decisions, said Baroness Joanna Shields, a key partner in the initiative. She worked as a tech executive with stints at Google and Facebook before pivoting to British politics.

“Regulation can’t keep up with this,” she said. But the leaders of the world’s religions, with billions of followers globally, have the “expertise of shepherding people’s moral safety,” she reasoned. Faith leaders ought to have a voice, Shields said.

“This dialogue, this direct connection is so important because the people who are building this understand the power and capabilities of what they’re building and they want to do it right — most of them,” she said of AI tech executives.

The goal of this initiative, according to Shields, is an eventual “set of norms or principles” informed by different groups and faiths, from Christians to Sikhs to Buddhists, that companies will abide by...

Present at the meeting were a variety of faith groups, including representatives from the Hindu Temple Society of North America, the Baha’i International Community, The Sikh Coalition, the Greek Orthodox Archdiocese of America and The Church of Jesus Christ of Latter-day Saints, widely known as the Mormon church...

The partnership highlights a growing coalition between faith and tech, born out of an effort to create moral AI — a contested concept which begs questions about whether that is possible and what it means...

“There’s some aspect of PR to it. The slogan was ‘Move fast and break things.’ And they broke too many things and too many people,” said Brian Boyd, the U.S. faith liaison for the nonprofit Future of Life Institute...

But other advocates for AI regulation and safety aren’t so sure these efforts are genuine.

“At best it’s a distraction. At worst it’s diverting attention from things that really matter,” said Rumman Chowdhury, the CEO of the nonprofit Humane Intelligence and the U.S. science envoy for AI under the Biden administration.

Chowdhury says she’s not inclined to believe religion is the best place to help answer questions surrounding AI and ethics, but thinks she understands why companies are increasingly turning to it.

“I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics,” she said. “They have very quickly realized that that’s just not true. That’s not real. So now they’re looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.”"

Friday, May 8, 2026

Meta’s AI Copyright Fight Just Escalated and Hollywood Is Watching Closely; Los Angeles Magazine, May 7, 2026

Los Angeles Magazine; Meta’s AI Copyright Fight Just Escalated and Hollywood Is Watching Closely

A new lawsuit against Mark Zuckerberg and Meta could reshape how studios, publishers and tech companies train the next generation of artificial intelligence

"The AI Gold Rush Is Running Into Copyright Law

According to the lawsuit filed in Manhattan federal court, Meta allegedly pulled material from massive libraries of pirated books and scraped internet content to train Llama, the company’s flagship large language model. Publishers argue the practice amounts to one of the largest copyright violations in modern history."

Thursday, May 7, 2026

Scott Turow's latest real-life legal thriller: Suing Meta for copyright infringement; NPR, May 5, 2026

NPR; Scott Turow's latest real-life legal thriller: Suing Meta for copyright infringement

""All Americans should understand that the bold future promised by A.I., has been, to paraphrase the investigative writer Alex Reisner, created with stolen words," said Turow in a statement to NPR. "It is all the more shameful that these violations of the law were undertaken by one of the richest corporations in the world."

According to the complaint, Meta "briefly considered licensing deals with major publishers" but changed its strategy in April 2023. The question of whether to license or pirate moving forward was "escalated" to Zuckerberg, after which, the complaint alleges, Meta's business development team received verbal instructions to stop licensing efforts. "If we license once [sic] single book, we won't be able to lean into the fair use strategy," a Meta employee is quoted as saying in the complaint.

"It's the most flagrant copyright breach in history," said Authors Guild CEO Mary Rasenberger in a statement to NPR. "And these voracious tech companies need to be held accountable.""

Wednesday, May 6, 2026

Publishers sue Meta, claiming it violated copyrights in training AI with their books; The Washington Post, May 5, 2026

The Washington Post; Publishers sue Meta, claiming it violated copyrights in training AI with their books

"The case, filed in the U.S. District Court for the Southern District of New York, is the latest in a string of lawsuits brought by publishers, authors, artists, photographers and news outlets aimed at forcing tech companies to compensate them for using their works to train their AI models. The plaintiffs argue in the lawsuit that the AI model’s ability to quickly produce knockoffs and summaries of copyrighted books threatens the livelihoods of publishers and authors.

A Meta spokesperson said in a statement that the company would “fight this lawsuit aggressively.”

“AI is powering transformative innovations, productivity and creativity for individuals and companies, and courts have rightly found that training AI on copyrighted material can qualify as fair use,” the spokesperson said.

The publishers’ complaint states Meta distributed millions of copyrighted works without authorization and without compensating authors or publishers, claiming that Zuckerberg “personally authorized and actively encouraged the infringement.” They also claim that Meta removed copyright notices and copyright management information from the works used to train the AI model, known as Llama."

Even More Authors, Publishers Sue Meta Over Copyright in AI Training: What's Different Now; CNET, May 5, 2026

Katelyn Chedraoui , CNET; Even More Authors, Publishers Sue Meta Over Copyright in AI Training: What's Different Now

Meta won a previous AI lawsuit brought by authors. Publishers are taking a different route this time.

"New lawsuit, same questions

Copyright is one of the most contentious legal issues around AI. Tech companies like Meta need high-quality, human-created data to build and refine their AI models. Nearly all of this material is protected by copyright. That means tech companies have to enter into licensing agreements or defend their use of the content as fair use under a provision of copyright law.

Meta and Anthropic have both won previous cases in lawsuits brought by authors, successfully defending their fair use. Anthropic agreed to settle some piracy claims with authors for $1.5 billion, or about $3,000 per pirated work. Both judges warned in their decisions that this won't be the result in every lawsuit...

One of the biggest considerations in these cases is whether tech companies' use of copyrighted books will make it harder for human authors to sell their work or otherwise affect the marketplace."

Tuesday, May 5, 2026

Behind the White House’s Potential Rethink on A.I.; The New York Times, May 5, 2026

Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly, Brian O’Keefe and Lauren Hirsch, The New York Times; Behind the White House’s Potential Rethink on A.I.

Artificial intelligence has become a national security concern. That has federal officials rethinking how lightly the government should regulate the technology.

"Andrew here. Should there be the equivalent of the F.D.A. for artificial intelligence models? Should there be a government approval process before new models are released?

Those are some of the big questions as the White House weighs an executive order that could increase oversight of new A.I. tools. Will it need congressional approval? How much will the industry push back? More below.

The debate over new A.I. guardrails

For most of his second term, President Trump has embraced a laissez-faire approach to artificial intelligence. Let Silicon Valley do its thing, his administration reasoned, and it would maintain its lead over China and other rivals.

But a report by The Times about the White House potentially taking a heavier hand in overseeing A.I., including reviewing new models before they’re released, underscores how even the Trump administration has to reckon with how powerful these tools are becoming."

Monday, May 4, 2026

White House Considers Vetting A.I. Models Before They Are Released; The New York Times, May 4, 2026

Tripp Mickle, Julian E. Barnes and Sheera Frenkel, The New York Times; White House Considers Vetting A.I. Models Before They Are Released

"President Trump, who promoted a hands-off approach to artificial intelligence and gave Silicon Valley free rein to roll out the technology, is considering the introduction of government oversight over new A.I. models, according to U.S. officials and people briefed on the deliberations.

The administration is discussing an executive order to create an A.I. working group that would bring together tech executives and government officials to examine potential oversight procedures, according to U.S. officials, who declined to be identified in order to discuss deliberations over sensitive policies. Among the potential plans is a formal government review process for new A.I. models.

In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. 

The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said."

Poll: The midterms' new big players are pushing agendas that voters don’t fully support; Politico, May 3, 2026

ERIN DOHERTY, JASPER GOODMAN, JESSICA PIPER, DANIEL BARNES and BRENDAN BORDELON, Politico; Poll: The midterms' new big players are pushing agendas that voters don’t fully support

"Deep-pocketed political groups tied to artificial intelligence and cryptocurrency are rapidly reshaping the midterm money landscape — but many Americans are uneasy with the industries behind the spending.

New results from The POLITICO Poll find broad public skepticism about crypto and AI, creating a possible conflict for candidates benefitting from an influx of contributions from the two industries. These groups are pouring millions of dollars into competitive 2026 races to elevate politicians who they believe will support their agendas in Washington.

Meanwhile, Americans have been slow to embrace either technology.

A 45 percent plurality of Americans say investing in cryptocurrency is not worth the risk, even if it can yield high returns, and a 44 percent plurality say AI is developing too quickly, according to the April survey conducted by independent firm Public First.

Nearly half of Americans say they trust a traditional bank with their money more than a cryptocurrency platform, while just 17 percent say the opposite. And two-thirds support lawmakers either imposing strict regulations or setting broad principles for the AI industry."

Friday, May 1, 2026

Pentagon Makes Deals With A.I. Companies to Expand Classified Work; The New York Times, May 1, 2026

Julian E. Barnes, The New York Times; Pentagon Makes Deals With A.I. Companies to Expand Classified Work

"The Pentagon announced on Friday that it had reached deals with some of the technology industry’s biggest companies in an effort to expand the military’s artificial intelligence capabilities and increase the number of firms authorized to be on classified networks.

The companies, according to the Defense Department, agreed to allow the Pentagon to employ their technology for “any lawful use,” a standard resisted by Anthropic, which was initially the only artificial intelligence model available on classified markets.

The Pentagon had previously confirmed deals with Elon Musk’s xAI, OpenAI and Google. In addition the Pentagon said it had reached deals with Amazon Web Services, Microsoft, Nvidia and Reflection AI, a start-up."

Thursday, April 30, 2026

The Secret Weapon Against AI Dominance; The Atlantic, April 30, 2026

Jacob Noti-Victor and Xiyin Tang, The Atlantic; The Secret Weapon Against AI Dominance

"More than 90 lawsuits have been filed by creators against AI companies for copyright infringement. Authors, musicians, visual artists, and news publishers have all accused firms such as OpenAI, Meta, and Anthropic of using their copyrighted works to train AI models without permission. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) These cases are frequently framed as the defining fight over the future of creative labor and the entertainment industry as a whole. As one of these lawsuits put it, artists are seeking to end “infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work.”

But the future of creative labor will more likely be decided through a different question within copyright law, one that has received far less attention: To what extent should AI-generated works receive copyright protection at all? In a 2024 case, Thaler v. Perlmutter, the Court of Appeals for the District of Columbia held that a work generated autonomously by an AI system cannot be protected by copyright, because copyright requires a human “author.” The Supreme Court declined to review that decision in March. With the lower-court decision left in place, the question now becomes how much AI content can be incorporated into a work before it becomes mostly or totally uncopyrightable; courts have not yet weighed in on this but may soon.

The Thaler decision (and any future decisions that refine it) will have major economic consequences for the creative industries and the workers they employ."

Monday, April 27, 2026

Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics; The Christian Science Monitor, April 27, 2026

The Christian Science Monitor; Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics

"A dispute between ChatGPT’s parent company, OpenAI, and one of the company’s founders – billionaire and tech entrepreneur Elon Musk – will play out in a federal court in Oakland, California, beginning April 27. 

Mr. Musk, who left the company in 2018, is suing OpenAI, claiming its leaders manipulated him into thinking he was contributing money to a nonprofit. He wants the company returned to its nonprofit status and seeks monetary compensation. 

OpenAI says Mr. Musk, who has since raised billions through the launch of his own for-profit company xAI, is misrepresenting facts to gain a competitive edge."

Thursday, April 23, 2026

AI's a suck up. Research shows how it flatters and suggests we're not to blame; NPR, April 23, 2026

 Ari Daniel, NPR; AI's a suck up. Research shows how it flatters and suggests we're not to blame

"In a recent study published in the journal Science, Cheng and her colleagues report that AI models offer affirmations more often than people do, even for morally dubious or troubling scenarios. And they found that this sycophancy was something that people trusted and preferred in an AI — even as it made them less inclined to apologize or take responsibility for their behavior.

The findings, experts say, highlight how this common AI feature may keep people returning to the technology, despite the harm it causes them.

It's not unlike social media in that both "drive engagement by creating addictive, personalized feedback loops that learn exactly what makes you tick," says Ishtiaque Ahmed, a computer scientist at the University of Toronto who wasn't involved in the research."

Meta will cut 10% of workforce as company pushes deeper into AI; CNBC, April 23, 2026

  Jonathan Vanian, CNBC; Meta will cut 10% of workforce as company pushes deeper into AI

"Meta plans to lay off 10% of its workforce, equaling about 8,000 jobs, as it continues ramping up investments in artificial intelligence.

The cuts will begin on May 20, and the company is scrapping plans to hire people for 6,000 open roles, according to a Thursday memo to employees. Bloomberg was first to report on the layoffs. 

Meta’s latest round of cuts follows several smaller job reductions that the company said were necessary to improve efficiency while focusing its efforts on generative AI, where it’s lagged OpenAI, Google and Anthropic."

Wednesday, April 22, 2026

When AI advice enters a murder case; Politico, April 22, 2026

Aaron Man, Politico; When AI advice enters a murder case

"Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI on Tuesday following a mass shooting at Florida State University that resulted in two deaths last year. The attorney general stated during a press conference that ChatGPT “offered significant advice” to the suspected gunman, Phoenix Ikner, based on a preliminary review by prosecutors.

“If this were a person on the other end of the screen, we would be charging them with murder,” Uthmeier said.

The prospect of OpenAI facing criminal liability raises new questions about whether developers should be held responsible for a chatbot’s potential role in such a tragedy.

Legal scholars told DFD that, compared with prior civil cases, imposing criminal liability on the company would be a much steeper uphill battle. A key challenge, according to them, would be proving OpenAI acted with criminal intent."

Tuesday, April 21, 2026

Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears; The Guardian, April 21, 2026

The Guardian; Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears

Alarm caused by posts by Alex Karp, the tech firm’s CEO, championing US military dominance and AI weapons

"The US spy tech company Palantir published a manifesto extolling the benefits of American power and implying some cultures are inferior to others – in what MPs have called “a parody of a RoboCop film” and “the ramblings of a supervillain”.

“Some cultures have produced vital advances; others remain dysfunctional and regressive,” wrote Palantir in a 22-point post on X over the weekend, which also called for an end to the “postwar neutering” of Germany and Japan...

The pronouncement is the most recent of a number of high-profile statements from Palantir and its chief executive, Alex Karp, which appear to indicate that Karp views himself as not simply the head of a software company, but a pundit with important insights into the future of civilisation."...

In an interview with CNBC in early March, Karp suggested that AI would “disrupt” the power of “highly educated, often female voters who vote mostly Democrat”, and instead empower “vocationally trained, often male, working-class voters”."

Sunday, April 19, 2026

The philosopher trying to teach ethics to AI developers; NPR, April 17, 2026

NPR; The philosopher trying to teach ethics to AI developers

Thousands of authors seek share of Anthropic copyright settlement; Reuters, April 17, 2026

Reuters; Thousands of authors seek share of Anthropic copyright settlement

"Nearly 120,000 authors and other copyright holders are seeking a share of a $1.5 billion class-action settlement with Anthropic over the company's unauthorized use of their books in artificial-intelligence training, according to a ​filing in California federal court.

Claims have been filed for 91% of the more than 480,000 ‌works covered by the settlement, according to a court filing  in the case on Thursday.

A judge will consider whether to grant final approval to the settlement – the largest ever in a U.S. copyright case – at a hearing next month.

Anthropic was the first and ​remains the only major AI company to settle a U.S. class-action by copyright holders alleging AI ​platforms used their work without permission to train their systems."

The Tyranny of AI Everywhere; The Atlantic, April 16, 2026

 Alexandra Petri, The Atlantic ; The Tyranny of AI Everywhere

Sneakers? Why stop there?

"I had the strangest dream. I dreamed that my shoes—my comfortable, unfashionable wool shoes—were pivoting to AI. “But you’re a shoe company,” I said. “Just go out of business! Keep your dignity!”

My shoes thanked me politely for the great question and then tried to walk me off a bridge. That was how I knew that their pivot to AI was complete. From Allbirds to AIlbirds (see, that L is an I!). Maybe I’ve cracked, I said to myself. Maybe this is the piece of AI news that has finally broken my spirit for good...

I tried to sit down on a bench, but the bench company had pivoted to AI. I couldn’t sit down, but the bench did tell me that I was right about everything. My newspaper had become AI a while ago, so there was nothing to read—or, rather, there were things to read, but I could not tell whether any of them were true. I thought I would go to a museum to cheer myself up. The paintings there had pivoted to AI (pAIntings), and their subjects were all following me with their eyes, not just Mona Lisa...

“There’s a place for AI,” I said. “But … not everywhere.”

“I’m sorry,” the painting said. “I didn’t want this either, but everyone is doing it!”...

“It’s fine,” my grandmother said. I was surprised to hear from her, because as far as I knew, she was dead. “I’m not dead,” she said. “I’m just pivoting to AI, like that shoe company. Nothing dies anymore. It just becomes AI.”"

Friday, April 17, 2026

AI Startups Have These Copyright Lawyers on Speed Dial; Bloomberg Law, April 16, 2026

 David Schultz , Bloomberg Law; AI Startups Have These Copyright Lawyers on Speed Dial

"Something similar connects many of the top attorneys representing the artificial intelligence industry in its most consequential battles: their resumes.

The common thread is Durie Tangri. More than 50 attorneys from the defunct Bay Area intellectual property firm are at the center of epic Silicon Valley copyright fights, just more than three years after Morrison Foerster acquired the practice...

“Tech copyright is a small world,” said Joseph Gratz, one of the alums at Morrison.

The Durie Tangri alums have benefited from the demand in tech copyright law, said Gratz, who has appeared in court defending OpenAI in almost two dozen federal lawsuits...

One of the marquee cases Durie Tangri took on was the decade-long copyright infringement suit over Google’s book digitization. Sonal Mehta, a Durie Tangri alum who is now at WilmerHale, said the boutique relished taking on matters that ventured into uncharted territory.

“We weren’t afraid to be operating in gray areas or to be looking at where the law hadn’t fully developed,” Mehta said. “We didn’t need to feel like every argument had to be something that was a cookie cutter argument that had already been made and won 20 times before.”"