Showing posts with label AI tech companies. Show all posts

Saturday, February 28, 2026

If A.I. Is a Weapon, Who Should Control It?; The New York Times, February 28, 2026

The New York Times; If A.I. Is a Weapon, Who Should Control It?

"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."

Wednesday, February 25, 2026

US DoD to Anthropic: compromise AI ethics or be banished from supply chain; CIO, February 25, 2026

CIO; US DoD to Anthropic: compromise AI ethics or be banished from supply chain

"Defense Secretary Hegseth has threatened to compel Anthropic to give the military free rein with AI, say reports.

A growing rift between the US Department of Defense (DoD) and Anthropic over how AI can be used by the military has led to Defense Secretary Pete Hegseth issuing a blunt ultimatum: work with us on our terms or risk being banned from Pentagon programs.

According to news site Axios, Hegseth gave Anthropic until Friday, February 27 to agree to its terms during a tense meeting this week. If no agreement is reached, the company would risk being deemed a “supply chain risk,” with Hegseth even threatening to invoke the Cold War-era Defense Production Act to compel cooperation, the report said.

The DoD’s view is that it should be free to use Anthropic’s AI for “all lawful purposes,” regardless of ethical boundaries set by the company itself. Anthropic, by contrast, wants to set narrower guardrails."

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon; CNN, February 25, 2026

CNN; Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

"Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.

In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.

The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines."

Tuesday, February 24, 2026

YouTuber sues Runway AI in latest copyright class action over AI training; Reuters, February 24, 2026

Reuters; YouTuber sues Runway AI in latest copyright class action over AI training

"Artificial intelligence video startup Runway AI has been hit with a proposed class action lawsuit in California federal court for allegedly misusing YouTube content to train its video generation platform.

YouTube creator David Gardner said in the complaint, filed in Los Angeles on Monday, that Runway bypassed YouTube's copyright protections to illegally download user videos for its AI training."

Hollywood studios escalate dispute over ByteDance’s ‘pervasive copyright infringement’ with its AI tools; Los Angeles Times, February 23, 2026

Cerys Davies, Los Angeles Times; Hollywood studios escalate dispute over ByteDance’s ‘pervasive copyright infringement’ with its AI tools

"Following the lead of several major Hollywood studios, the Motion Picture Assn. has sent its own cease-and-desist letter to ByteDance, the company behind the controversial artificial-intelligence video generator Seedance 2.0.

The trade association, which represents the interests of major film and TV studios, sent a notice to the Chinese company, reflecting its members’ collective response to “ByteDance’s pervasive copyright infringement.” MPA argues that Seedance’s unauthorized use of copyrighted materials is a “feature, not a bug.” 

The letter, sent Friday, marks the first time the MPA has forwarded a cease-and-desist to a major AI firm and represents a further escalation of tensions between the entertainment industry and an AI company."

Monday, February 23, 2026

Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation; The New York Times, February 23, 2026

The New York Times; Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation

The ads by Public First Action, which started airing on Monday, are part of an escalating political war over artificial intelligence before the midterm elections.

"A new ad campaign on Monday warned northern New Jersey residents that Congress could leave them vulnerable to harm by artificial intelligence.

The ad, which opens with photos of A.I.-generated women smiling on social media alongside A.I.-generated headlines, urged voters to tell their House representative to vote against a bill that would block states from creating protections against A.I. scams.

“He can make sure A.I. serves us, not the other way around,” the ad said of Josh Gottheimer, the Democratic co-chair of the House’s new A.I. commission, which is expected to heavily influence legislation on the topic. “New Jersey families come before Big Tech’s bottom line.”

The $300,000 ad campaign was paid for by Public First Action, a super PAC operation backed by the A.I. start-up Anthropic. Focused on New Jersey, the campaign is likely to run several weeks — part of several similar initiatives by the group nationally."

Thursday, February 19, 2026

Hollywood's copyright fight meets China's AI boom; Axios, February 19, 2026

Madison Mills, Sara Fischer, Axios; Hollywood's copyright fight meets China's AI boom

"The big picture: Hollywood is leaning on copyright law to rein in Seedance, but legal pressure hasn't slowed the rise of Chinese AI models more broadly."

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

Ashley Capoot, CNBC; Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

Rebecca Heilweil, Fast Company; Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

Dave Lawler, Maria Curi, Mike Allen, Axios; Pentagon threatens Anthropic punishment

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."

Tuesday, February 17, 2026

New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI; Notre Dame News, February 17, 2026

Carrie Gates, Notre Dame News; New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI

"One of the fundamental promises of artificial intelligence is that it will strengthen human agency by freeing us from mundane, repetitive tasks.

However, a new publication, co-edited by University of Notre Dame theologian Paul Scherz, argues that promise “rings hollow” in the face of efforts by technology companies to manipulate consumers — and ultimately deprive them of agency.

The book, “Reclaiming Human Agency in the Age of Artificial Intelligence,” is the second in a series created by the Vatican’s AI Research Group for the Centre for Digital Culture. Part of the Holy See’s Dicastery for Culture and Education, the group is composed of scholars from across North America who represent a range of disciplines from theology and philosophy to computer science and business.

“We wanted to examine the idea of how AI affects human actions, human freedom and the ability of people to develop virtues — which we classified under the heading of human agency,” said Scherz, the Our Lady of Guadalupe College Professor of Theology and the ND–IBM Tech Ethics Lab Program Chair. “This is such an important topic right now because one of the most hyped developments that we’re hearing about right now is ‘agentic’ AI — or AI that will take action for people.

“We think it’s important to distinguish what the differences are between these AI agents and true human agents — and how the AI we have now is affecting our actions.”

In “Reclaiming Human Agency,” Scherz, co-editor Brian Patrick Green of Santa Clara University and their fellow research group members cite potentially problematic issues with the technology, including addictive applications, “surveillance capitalism” that exploits users’ personal data for profit, widespread de-skilling in the workplace as complex tasks are handed over to AI and the growth of algorithmic governance — where social media algorithms influence what people buy, how they perceive events and even how they vote.

They also assert that human agency should not be seen in terms of “freedom from” tasks, but in “freedom for” pursuing the good, seeking excellence and purpose by building flourishing relationships with others and with God."

The economics of AI outweigh ethics for tech CEOs, business leader says; CNN, February 16, 2026

CNN; The economics of AI outweigh ethics for tech CEOs, business leader says

"Podcast host and business leader Scott Galloway joins Dana Bash on "Inside Politics" to discuss the need for comprehensive government regulation of AI. “We have increasingly outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech," Galloway tells Bash, adding, "This is another example of how government is failing to step in and provide thoughtful, sensible regulations.” His comments come as the Pentagon confirms it's reviewing a contract with AI company Anthropic after a reported clash over the scope of AI guardrails."

Monday, February 16, 2026

ByteDance says it will add safeguards to Seedance 2.0 following Hollywood backlash; CNBC, February 16, 2026

Dylan Butts, CNBC; ByteDance says it will add safeguards to Seedance 2.0 following Hollywood backlash

"Chinese tech giant ByteDance has said it will strengthen safeguards on a new artificial intelligence video-making tool, following complaints of copyright theft from entertainment giants. 

The tool, Seedance 2.0, enables users to create realistic videos based on text prompts. However, viral videos shared online appear to show copyrighted characters and celebrity likenesses, raising intellectual property concerns in the U.S. 

“ByteDance respects intellectual property rights and we have heard the concerns regarding Seedance 2.0,” a company spokesperson said in a statement shared with CNBC."

Saturday, February 14, 2026

How Fast Can A.I. Change the Workplace?; The New York Times, February 14, 2026

Ross Douthat, The New York Times; How Fast Can A.I. Change the Workplace?

"People need to understand the part of this argument that’s absolutely correct: It is impossible to look at the A.I. models we have now, to say nothing of what we might get in six months or a year, and say that these technological tools can’t eventually replace a lot of human jobs. The question is whether people inside the A.I. hype loop are right about how fast it could happen, and then whether it will create a fundamental change in human employment rather than just a structural reshuffle.

One obstacle to radical speed is that human society is a complex bottleneck through which even the most efficiency-maxing innovations have to pass. As long as the efficiencies offered by A.I. are mediated by human workers, there will be false starts and misadaptations and blind alleys that make pre-emptive layoffs reckless or unwise.

Even if firings make sense as a pure value proposition, employment in an advanced economy reflects a complex set of contractual, social, legal and bureaucratic relationships, not just a simple productivity-maximizing equation. So many companies might delay any mass replacement for reasons of internal morale or external politics or union rules, and adapt to A.I.’s new capacities through reduced hiring and slow attrition instead.

I suspect the A.I. insiders underestimate the power of these frictions, as they may underestimate how structural hurdles could slow the adoption of any cure or tech that their models might discover. Which would imply a longer adaptation period for companies, polities and humans.

Then, after this adaptation happens, and A.I. agents are deeply integrated into the work force, there are two good reasons to think that most people will still be doing gainful work. The first is the entire history of technological change: Every great innovation has yielded fears of mass unemployment and, every time we’ve found our way to new professions, new demands for human labor that weren’t imaginable before.

The second is the reality that people clearly like a human touch, even in situations where we can already automate it away. The economist Adam Ozimek has a good rundown of examples: Player pianos have not done away with piano players, self-checkout has not eliminated the profession of cashier and millions of waiters remain in service in the United States because an automated restaurant experience seems inhuman."

Friday, February 13, 2026

A.I. Companies Are Eating Higher Education; The New York Times, February 12, 2026

Matthew Connelly, The New York Times; A.I. Companies Are Eating Higher Education

"Young people are quickly becoming so dependent on A.I. that they are losing the ability to think for themselves. And rather than rallying resistance, academic administrators are aiding and abetting a hostile takeover of higher education...

It is still too early to know how A.I. usage affects young people’s ability to learn. But research suggests that students using A.I. do not read as carefully when doing research and that they write with diminished accuracy and originality. Students do not even realize what they are missing. But educators and employers know. Reading closely, thinking critically and writing with logic and evidence are precisely the skills people need to realize the bona fide potential of A.I. to support lifelong learning."

Wednesday, February 11, 2026

Adam Schiff And John Curtis Introduce Bill To Require Tech To Disclose Copyrighted Works Used In AI Training Models; Deadline, February 10, 2026

Ted Johnson, Deadline; Adam Schiff And John Curtis Introduce Bill To Require Tech To Disclose Copyrighted Works Used In AI Training Models

"Sen. Adam Schiff (D-CA) and Sen. John Curtis (R-UT) are introducing a bill that touches on one of the hottest Hollywood-tech debates in the development of AI: The use of copyrighted works in training models.

The Copyright Labeling and Ethical AI Reporting Act would require companies to file a notice with the Register of Copyrights detailing the copyrighted works used to train datasets for an AI model. The notice would have to be filed before a new model is publicly released, and would apply retroactively to models already available to consumers.

The Copyright Office also would be required to establish a public database of the notices filed. There also would be civil penalties for failure to disclose the works used."

OpenAI Is Making the Mistakes Facebook Made. I Quit.; The New York Times, February 11, 2026

Zoë Hitzig, The New York Times; OpenAI Is Making the Mistakes Facebook Made. I Quit.

"This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone.

I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.

I don’t believe ads are immoral or unethical. A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy."

Tuesday, February 10, 2026

No, the human-robot singularity isn’t here. But we must take action to govern AI; The Guardian, February 10, 2026

The Guardian; No, the human-robot singularity isn’t here. But we must take action to govern AI

"Based upon my years of research on bots, AI and computational propaganda, I can tell you two things with near certainty. First, Moltbook is nothing new. Humans have built bots that can talk to one another – and to humans – for decades. They’ve been designed to make outlandish, even frightening, claims throughout this time. Second, the singularity is not here. Nor is AGI. According to most researchers, neither is remotely close. AI’s advancement is limited by a number of very tangible factors: mathematics, data access and business costs among them. Claims that AGI or the singularity have arrived are not grounded in empirical research or science.

But as tech companies breathlessly promote their AI capabilities another thing is also clear: big tech is now far from being the countervailing force it was during the first Trump administration. The overblown claims emanating from Silicon Valley about AI have become intertwined with the nationalism of the US government as the two work together in a bid to “win” the AI race. Meanwhile, ICE is paying Palantir $30m to provide AI-enabled software that may be used for government surveillance. Musk and other tech executives continue to champion far-right causes. Google and Apple also removed apps people were using to track ICE from their digital storefronts after political pressure.

Even if we don’t yet have to worry about the singularity, we do need to fight back against this marriage of convenience caused by big tech’s quest for higher valuations and Washington’s desire for control. When tech and politicians are in lockstep, constituents will need to use their power to decide what will happen with AI."

Friday, February 6, 2026

Publishers Strike Back Against Google in Infringement Suit; Publishers Weekly, February 6, 2026

Jim Milliot, Publishers Weekly; Publishers Strike Back Against Google in Infringement Suit

"The Association of American Publishers continued its fight this week to allow two of its members, Hachette Book Group and Cengage, to join a class action copyright infringement lawsuit against Google and its generative AI product Gemini. The lawsuit was first brought by a group of illustrators and writers in 2023.

In mid-January the AAP filed its first motion to allow the two publishers to take part in the lawsuit that is now before Judge Eumi K. Lee in the U.S. District Court for the Northern District of California. Earlier this week the AAP filed its reply to Google’s motion asking the court to block AAP’s request.

At the core of Google’s argument is the notion that the publishers should have asked to intervene sooner, as well as the assertion that publishers have no interest in the case because they don’t own authors’ works.

In its response, AAP argues that it was only when the case reached class certification that the publishers’ interests became clear. The new filing also rebuts Google’s other claim that publishers don’t own any rights.

“Google’s professed misunderstanding of ownership exemplifies exactly the kind of value that Proposed Intervenors bring to the case,” the AAP stated, arguing that both HBG and Cengage own certain rights to the works in question and that “scores” of other publishers will be impacted by the litigation."

Young people in China have a new alternative to marriage and babies: AI pets; The Washington Post, February 6, 2026

The Washington Post; Young people in China have a new alternative to marriage and babies: AI pets

"While China and the United States vie for supremacy in the artificial intelligence race, China is pulling ahead when it comes to finding ways to apply AI tools to everyday uses — from administering local government and streamlining police work to warding off loneliness. People falling in love with chatbots has captured headlines in the U.S., and the AI pet craze in China adds a new, furry dimension to the evolving human relationship with AI."