Showing posts with label AI tech companies. Show all posts

Thursday, April 23, 2026

AI's a suck up. Research shows how it flatters and suggests we're not to blame; NPR, April 23, 2026

 Ari Daniel, NPR; AI's a suck up. Research shows how it flatters and suggests we're not to blame

"In a recent study published in the journal Science, Cheng and her colleagues report that AI models offer affirmations more often than people do, even for morally dubious or troubling scenarios. And they found that this sycophancy was something that people trusted and preferred in an AI — even as it made them less inclined to apologize or take responsibility for their behavior.

The findings, experts say, highlight how this common AI feature may keep people returning to the technology, despite the harm it causes them.

It's not unlike social media in that both "drive engagement by creating addictive, personalized feedback loops that learn exactly what makes you tick," says Ishtiaque Ahmed, a computer scientist at the University of Toronto who wasn't involved in the research."

Meta will cut 10% of workforce as company pushes deeper into AI; CNBC, April 23, 2026

  Jonathan Vanian, CNBC; Meta will cut 10% of workforce as company pushes deeper into AI

"Meta plans to lay off 10% of its workforce, equaling about 8,000 jobs, as it continues ramping up investments in artificial intelligence.

The cuts will begin on May 20, and the company is scrapping plans to hire people for 6,000 open roles, according to a Thursday memo to employees. Bloomberg was first to report on the layoffs. 

Meta’s latest round of cuts follows several smaller job reductions that the company said were necessary to improve efficiency while focusing its efforts on generative AI, where it’s lagged OpenAI, Google and Anthropic."

Wednesday, April 22, 2026

When AI advice enters a murder case; Politico, April 22, 2026

Aaron Man, Politico; When AI advice enters a murder case

"Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI on Tuesday following a mass shooting at Florida State University that resulted in two deaths last year. The attorney general stated during a press conference that ChatGPT “offered significant advice” to the suspected gunman, Phoenix Ikner, based on a preliminary review by prosecutors.

“If this were a person on the other end of the screen, we would be charging them with murder,” Uthmeier said.

The prospect of OpenAI facing criminal liability raises new questions about whether developers should be held responsible for a chatbot’s potential role in such a tragedy.

Legal scholars told DFD that, compared with prior civil cases, imposing criminal liability on the company would be a much steeper uphill battle. A key challenge, according to them, would be proving OpenAI acted with criminal intent."

Tuesday, April 21, 2026

Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears; The Guardian, April 21, 2026

The Guardian; Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears

Alarm caused by posts of Alex Karp, the tech firm’s CEO, championing US military dominance and the use of AI weapons

"The US spy tech company Palantir published a manifesto extolling the benefits of American power and implying some cultures are inferior to others – in what MPs have called “a parody of a RoboCop film” and “the ramblings of a supervillain”.

“Some cultures have produced vital advances; others remain dysfunctional and regressive,” wrote Palantir in a 22-point post on X over the weekend, which also called for an end to the “postwar neutering” of Germany and Japan...

The pronouncement is the most recent of a number of high-profile statements from Palantir and its chief executive, Alex Karp, which appear to indicate that Karp views himself as not simply the head of a software company, but a pundit with important insights into the future of civilisation."...

In an interview with CNBC in early March, Karp suggested that AI would “disrupt” the power of “highly educated, often female voters who vote mostly Democrat”, and instead empower “vocationally trained, often male, working-class voters”."

Sunday, April 19, 2026

The philosopher trying to teach ethics to AI developers; NPR, April 17, 2026

NPR; The philosopher trying to teach ethics to AI developers

Thousands of authors seek share of Anthropic copyright settlement; Reuters, April 17, 2026

Reuters; Thousands of authors seek share of Anthropic copyright settlement

"Nearly 120,000 authors and other copyright holders are seeking a share of a $1.5 billion class-action settlement with Anthropic over the company's unauthorized use of their books in artificial-intelligence training, according to a filing in California federal court.

Claims have been filed for 91% of the more than 480,000 works covered by the settlement, according to a court filing in the case on Thursday.

A judge will consider whether to grant final approval to the settlement – the largest ever in a U.S. copyright case – at a hearing next month.

Anthropic was the first and remains the only major AI company to settle a U.S. class-action by copyright holders alleging AI platforms used their work without permission to train their systems."

The Tyranny of AI Everywhere; The Atlantic, April 16, 2026

Alexandra Petri, The Atlantic; The Tyranny of AI Everywhere

Sneakers? Why stop there?

"I had the strangest dream. I dreamed that my shoes—my comfortable, unfashionable wool shoes—were pivoting to AI. “But you’re a shoe company,” I said. “Just go out of business! Keep your dignity!”

My shoes thanked me politely for the great question and then tried to walk me off a bridge. That was how I knew that their pivot to AI was complete. From Allbirds to AIlbirds (see, that L is an I!). Maybe I’ve cracked, I said to myself. Maybe this is the piece of AI news that has finally broken my spirit for good...

I tried to sit down on a bench, but the bench company had pivoted to AI. I couldn’t sit down, but the bench did tell me that I was right about everything. My newspaper had become AI a while ago, so there was nothing to read—or, rather, there were things to read, but I could not tell whether any of them were true. I thought I would go to a museum to cheer myself up. The paintings there had pivoted to AI (pAIntings), and their subjects were all following me with their eyes, not just the Mona Lisa...

“There’s a place for AI,” I said. “But … not everywhere.”

“I’m sorry,” the painting said. “I didn’t want this either, but everyone is doing it!”...

“It’s fine,” my grandmother said. I was surprised to hear from her, because as far as I knew, she was dead. “I’m not dead,” she said. “I’m just pivoting to AI, like that shoe company. Nothing dies anymore. It just becomes AI.”"

Friday, April 17, 2026

AI Startups Have These Copyright Lawyers on Speed Dial; Bloomberg Law, April 16, 2026

David Schultz, Bloomberg Law; AI Startups Have These Copyright Lawyers on Speed Dial

"Something similar connects many of the top attorneys representing the artificial intelligence industry in its most consequential battles: their resumes.

The common thread is Durie Tangri. More than 50 attorneys from the defunct Bay Area intellectual property firm are at the center of epic Silicon Valley copyright fights, just more than three years after Morrison Foerster acquired the practice...

“Tech copyright is a small world,” said Joseph Gratz, one of the alums at Morrison.

The Durie Tangri alums have benefited from the demand in tech copyright law, said Gratz, who has appeared in court defending OpenAI in almost two dozen federal lawsuits...

One of the marquee cases Durie Tangri took on was the decade-long copyright infringement suit over Google’s book digitization. Sonal Mehta, a Durie Tangri alum who is now at WilmerHale, said the boutique relished taking on matters that ventured into uncharted territory.

“We weren’t afraid to be operating in gray areas or to be looking at where the law hadn’t fully developed,” Mehta said. “We didn’t need to feel like every argument had to be something that was a cookie cutter argument that had already been made and won 20 times before.”"

AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.; The Wall Street Journal, April 14, 2026

Katherine Blunt, The Wall Street Journal; AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.

As chatbots and agents grow more powerful and ubiquitous, recognizing the moments when they go rogue can be tricky


"Chad Olson was confused when his Gemini artificial-intelligence chatbot told him he had a family reunion planning session marked on his calendar."

Tuesday, April 14, 2026

Agency in the Age of AI; Time, April 14, 2026

John Palfrey, Time; Agency in the Age of AI

"OpenAI’s recent acquisition of OpenClaw, an open-source, autonomous AI agent designed to run locally on a user’s computer, is a sign that AI agents are quickly being given more responsibilities and more access—from emails to bank accounts, a decision with unintended consequences, including deleted inboxes and Amazon Web Services outages. Peter Steinberger, the founder of OpenClaw, said he wants to “build an agent that even my mum can use.” But there is a difference between using technology to improve efficiency and giving technology agency that humans should hold. 

These developments prompt hard questions, particularly for young people who are seeking agency in their personal and professional lives. Does it make sense to train to be an actuary if AI is supposed to be good at predicting unknown outcomes based on data? Is it worth the cost today to train to be a lawyer or an accountant or pursue higher education at all when all the answers are supposedly at our fingertips? Put another way, what does agency look like in an era dominated by the spread of AI?"

Monday, April 13, 2026

Nobody is governing AI; Quartz, April 8, 2026

Jackie Snow, Quartz; Nobody is governing AI

Artificial intelligence is advancing faster than lawmakers can regulate it, while global AI governance fragments in real time

"Artificial intelligence is now making hiring decisions, tutoring children, optimizing power grids, and targeting weapons systems. The rules governing any of that are, almost everywhere, either nonexistent, stalled in committee, or under active attack.

In the United States, the federal government has spent three years producing executive orders, frameworks, and guidelines, none of which have become law. States that tried to fill the gap have been threatened with funding cuts and lawsuits. In Europe, the most ambitious AI legislation in the world is being delayed or softened before most of it has even taken effect. The technology, meanwhile, has not paused for any of this."

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

 Queenie Wong , Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, stating he hopes it might convince people not to repeat the attack despite their opinions on him.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

Sunday, April 12, 2026

Is AI the greatest art heist in history?; The Guardian, April 12, 2026

The Guardian; Is AI the greatest art heist in history?

New technologies of reproduction are plundering the art world – and getting away with it

"In 2026, it’s easy to see why generative AI is bad. The internet has nicknamed its excretions “slop”. The CEOs of AI companies prance about on stage like supervillains, bragging that their products will eliminate vast swathes of work. Generative AI requires sacrificing the world’s water to feed its hideous data centres. Around the globe, chatbots induce schizophrenic delusions and urge teens to kill themselves – all while turning users’ brains to mush.

Who could have predicted this? Artists, that’s who...

When tech boosters want to demonise resistance, they invoke the luddites. By their telling, the luddites were primitive idiots who smashed machines they were too stupid to understand. History, though, tells a different story. As recounted in Brian Merchant’s sublime work Blood in the Machine, luddites were skilled artisans, fighting for their way of life against the “satanic mills” – textile sweatshops powered by child semi-slaves. Forbidden from unionising, luddites smashed machines as a protest tactic. And they did not lose to the inevitable march of progress. They lost to physical force. The government called in troops, and the luddites were either executed or shipped to penal colonies in Australia.

Artists too are fighting for a way of life. And if we are too disorganised to triumph, that will be everyone’s loss. AI companies’ inappropriate scraping may have started with the work of illustrators like me, but it has grown to encompass everything else. It extends to the billions of dollars that these companies squander each year, to the carbon they burn, to the rare minerals in their chips, to the land on which their data centres sit, to culture, education, sanity and our very imaginations. In return for the entirety of the human and non-human world, the tech lords can only offer us dystopia. Their fantasy future contains neither meaningful work nor real communities, just robots chattering to each other, leaving nothing for us."

The most 'ethical' AI company might also be the web's biggest freeloader; Business Insider, April 12, 2026

Business Insider; The most 'ethical' AI company might also be the web's biggest freeloader

"Cloudflare's latest data offers one of the clearest snapshots yet of how AI companies consume the web, and how little they give back.

The company, which powers roughly 20% of the internet, tracks how AI bots crawl websites versus how often those platforms send users back through referrals. The resulting "crawl-to-refer" ratio is a simple yet telling metric: how much value is extracted compared to returned.

The early April 2026 figures are stark. Anthropic is the worst by a wide margin, with a ratio of 8,800 to 1. That means its bots crawl webpages 8,800 times for every referral sent...

Anthropic's position is particularly striking given its reputation for being "ethical." That reputation has made it a preferred choice among some users who want to support more responsible AI development. This data highlights a different dimension of ethics — how companies interact with the broader web ecosystem that provides information for AI model outputs."
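Cloudflare's "crawl-to-refer" ratio described above is simple arithmetic: crawls divided by referrals. A minimal sketch of the metric, using the article's reported figure for Anthropic; the function and variable names here are illustrative, not Cloudflare's API.

```python
def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """How many pages a bot crawls for every visitor it sends back.

    A higher ratio means more value extracted from the web than returned.
    """
    if referrals == 0:
        # Pure extraction: crawling with no traffic sent back at all.
        return float("inf")
    return crawls / referrals

# The article reports an 8,800-to-1 ratio for Anthropic's bots;
# the absolute counts below are made up to illustrate the calculation.
ratio = crawl_to_refer_ratio(crawls=8_800_000, referrals=1_000)
print(f"{ratio:,.0f} crawls per referral")  # prints "8,800 crawls per referral"
```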

Saturday, April 11, 2026

How AI is getting better at finding security holes; NPR, April 11, 2026

NPR; How AI is getting better at finding security holes

"In the past few months, AI models have gone from producing hallucinations to becoming effective at finding security flaws in software, according to developers who maintain widely used cyber infrastructure. Those pieces of software, among other things, power operating systems and transfer data for things connected to the internet.

While these new capabilities can help developers make software more secure, they can also be weaponized by hackers and nation states to steal information and money or disrupt critical services.

The latest development of AI's cyber capability came on Tuesday, when AI lab Anthropic announced it had developed a powerful new model the company believes could "reshape cybersecurity." It said that its latest model, Mythos Preview, was able to find "high-severity vulnerabilities, including some in every major operating system and web browser." Not only that, the model was better at coming up with ways to exploit the vulnerabilities it found, which means malicious actors can more effectively achieve their goals.

For now, the company is limiting access to the model to around 50 select companies and organizations "in an effort to secure the world's most critical software." They're calling the collaboration Project Glasswing, naming it after a butterfly species with transparent wings.

Anthropic says the risk for misuse is so high that it has no plans to release this particular model to the general public, according to the announcement, but it will release other related models. "Our eventual goal is to enable our users to safely deploy Mythos-class models at scale," the company wrote."

Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

Maxwell Zeff, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Thursday, April 9, 2026

Claude Mythos Is Everyone’s Problem; The Atlantic, April 9, 2026

 Matteo Wong , The Atlantic; Claude Mythos Is Everyone’s Problem

What happens when AI can hack everything?

"These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. These are the AI superpowers."

Who owns ideas in the AI age?; Fortune, April 8, 2026

Fortune; Who owns ideas in the AI age? David Shelley, CEO of Hachette’s U.K. and U.S. operations, on taking on Big Tech, defending copyright, and why the future of human creativity is at stake.

"Can you ever really own an idea?"

Wednesday, April 8, 2026

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Monday, April 6, 2026

US music publishers suing Anthropic make their case against AI 'fair use'; Reuters, March 24, 2026

Reuters; US music publishers suing Anthropic make their case against AI 'fair use'

"Music publishers Universal Music Group, Concord and ABKCO have asked a judge in California to rule that U.S. copyright law does not insulate artificial intelligence startup Anthropic from liability for copying their song lyrics to train its AI-powered chatbot Claude.

The publishers' request, filed on Monday in federal court in San Jose, tees up a critical question in the legal battle between creators and tech companies: Does the doctrine of "fair use" apply to the copying of millions of copyrighted works to train AI models?"