Showing posts with label DoD. Show all posts

Tuesday, March 3, 2026

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer, International Relations, Deakin University, The Conversation


"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

Monday, March 2, 2026

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026


"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Sunday, March 1, 2026

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns; The Guardian, February 28, 2026


CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance

"OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors.

Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used to those ends.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.

Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”."

Saturday, February 28, 2026

If A.I. Is a Weapon, Who Should Control It?; The New York Times, February 28, 2026


"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."

OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash; The New York Times, February 27, 2026


"OpenAI, the maker of ChatGPT, said on Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technologies for classified systems, just hours after President Trump ordered federal agencies to stop using A.I. technology made by rival Anthropic.

Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose, a term required by the Pentagon. But OpenAI also said it had found a way to ensure that its technologies would adhere to its safety principles by installing specific technical guardrails on its systems."

Friday, February 27, 2026

Trump Orders Government to Stop Using Anthropic After Pentagon Standoff; The New York Times, February 27, 2026

Julian E. Barnes, The New York Times

"President Trump on Friday ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, a directive that could vastly complicate government intelligence analysis and defense work.

Writing on Truth Social, Mr. Trump used harsh words for Anthropic, describing it as a “radical Left AI company run by people who have no idea what the real World is all about.”

Shortly after Mr. Trump’s announcement, and 13 minutes after a Pentagon deadline, Defense Secretary Pete Hegseth designated the company a “supply-chain risk to national security.” The label means that no contractor or supplier that works with the military can do business with Anthropic.

The move is all but unheard-of, legal experts said. It strips an American company of its government work by using a process previously deployed only with foreign companies the United States considered security risks."

Pentagon Standoff Is a Decisive Moment for How A.I. Will Be Used in War; The New York Times, February 27, 2026

Adam Satariano and Julian E. Barnes, The New York Times

The Pentagon’s contract dispute with Anthropic is part of a wider clash about the use of artificial intelligence for national security and who decides on any safeguards.

"The fight between the Department of Defense and the artificial intelligence company Anthropic has ostensibly been about a $200 million contract over the use of A.I. in classified systems.

But as the two sides careen toward a 5:01 p.m. Friday deadline over terms of the contract, far more is at stake.

Amid the legalese and heated rhetoric are questions being asked globally about how to use A.I., what the technology’s risks are and who gets to decide on setting any limits — the makers of A.I. or national governments.

Underlying it all is fear and awe over the dizzying pace of A.I. progress and the technology’s uncertain impact on society."

Pentagon Attacks Anthropic Chief as Deadline Looms in Standoff; The New York Times, February 27, 2026

Julian E. Barnes, The New York Times

The A.I. firm had rejected military officials’ latest offer. Anthropic has until 5:01 p.m. on Friday to give them unrestricted access to its model.

"A standoff between the Pentagon and the artificial intelligence company Anthropic appeared to be deepening as the two sides hurtled toward a 5:01 p.m. deadline Friday that military officials gave the firm to either allow them unrestricted access to its most advanced model or face consequences.

Defense Department officials criticized Anthropic’s leader after the company on Thursday rejected their latest offer to settle the dispute. The Pentagon has threatened to either cut the company off from government business by declaring it a supply chain threat or force it to provide its frontier model without restrictions under the Defense Production Act.

Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who on Thursday released a statement about why the company would not agree to the Defense Department’s latest terms.

“It’s a shame that @DarioAmodei is a liar and has a God-complex,” Mr. Michael wrote late Thursday. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”"

Wednesday, February 25, 2026

US DoD to Anthropic: compromise AI ethics or be banished from supply chain; CIO, February 25, 2026


"Defense Secretary Hegseth has threatened to compel Anthropic to give the military free rein with AI, say reports.

A growing rift between the US Department of Defense (DoD) and Anthropic over how AI can be used by the military has led to Defense Secretary Pete Hegseth issuing a blunt ultimatum: work with us on our terms or risk being banned from Pentagon programs.

According to news site Axios, Hegseth gave Anthropic until Friday, February 27 to agree to its terms during a tense meeting this week. If no agreement is reached, the company would risk being deemed a “supply chain risk,” with Hegseth even threatening to invoke the Cold War-era Defense Production Act to compel cooperation, the report said.

The DoD’s view is that it should be free to use Anthropic’s AI for “all lawful purposes,” regardless of ethical boundaries set by the company itself. Anthropic, by contrast, wants to set narrower guardrails."

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon; CNN, February 25, 2026


"Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.

In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.

The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines."

Thursday, February 19, 2026

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

Ashley Capoot, CNBC

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

Rebecca Heilweil, Fast Company

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

Dave Lawler, Maria Curi, Mike Allen, Axios

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."

Tuesday, February 17, 2026

The economics of AI outweigh ethics for tech CEOs, business leader says; CNN, February 16, 2026


"Podcast host and business leader Scott Galloway joins Dana Bash on "Inside Politics" to discuss the need for comprehensive government regulation of AI. “We have increasingly outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech," Galloway tells Bash, adding, "This is another example of how government is failing to step in and provide thoughtful, sensible regulations.” His comments come as the Pentagon confirms it's reviewing a contract with AI company Anthropic after a reported clash over the scope of AI guardrails."

Wednesday, February 11, 2026

Don’t turn the military’s newspaper into a message platform; Stars and Stripes, February 10, 2026

Rufus Friday | Center for Integrity in News Reporting, Stars and Stripes

"There are places where a news organization’s values aren’t just written down, they’re literally inscribed on the walls.

Recently, staff at the Stars and Stripes press facility at Camp Humphreys in South Korea, the largest United States overseas military facility, unveiled a large mural titled “Stars and Stripes’ Core Values.” The words aren’t subtle: Credibility. Impartiality. Truth-telling. Balanced. Accountable.

Those aren’t marketing slogans. They are the compact between a newsroom and its readers, and especially important when the readership is the U.S. military community, often far from home, often in harm’s way.

That is why the Department of Defense’s recent posture toward Stars and Stripes is so alarming.

According to reporting by The Associated Press and other news organizations, the Pentagon said in a public statement by a spokesperson for Defense Secretary Pete Hegseth that it would “refocus” Stars and Stripes away from certain subject areas and toward content “custom tailored to our warfighters,” including weapons systems, fitness, lethality and related themes. The same reporting describes proposed steps such as removing content from wire services like the AP and Reuters and having a significant portion of content produced by the Pentagon itself.

Stars and Stripes is unusual, and intentionally so. The paper’s own “About” page states plainly that it is “editorially independent of interference from outside its own editorial chain-of-command,” and “unique among Department of Defense authorized news outlets” in being “governed by the principles of the First Amendment.” 

In August 2025, Stars and Stripes took a step that I believe should be studied by every news organization trying to rebuild trust: it adopted and published a statement of core values emphasizing credibility and impartiality, and drawing a bright line between news and opinion. 

When a government authority suddenly declares that a news outlet must abandon certain viewpoints and then signals it will take a more hands-on role in shaping editorial operations, it sends a clear message to readers: the outlet is being pressured to produce coverage that satisfies those in power, rather than reporting grounded in facts.

No serious newsroom can sustain trust, already in dangerously short supply, under that condition. Gallup reports that Americans’ confidence in mass media has fallen to historic lows, with just 28% expressing a great deal or fair amount of trust. When Gallup began measuring media trust in the 1970s, that figure routinely exceeded two-thirds of the public.

If our nation is struggling to persuade people that journalism is independent, accurate, objective, impartial and not an instrument of power, why would we take one of the country’s most symbolically important newsrooms, an outlet serving people in uniform, and wrap it more tightly inside the very institution it is entrusted to cover?

Last fall, I was in Japan for the 80th anniversary celebration of the Pacific edition of Stars and Stripes. In a detailed first-person account, the gala’s keynote speaker, journalist Steve Herman, described the paper’s long history of resisting becoming a “propaganda rag,” including General Eisenhower’s defense of the paper’s independence. 

That history matters because it explains why generations of commanders tolerated uncomfortable stories: a paper that service members trust does more for cohesion and legitimacy than one that reads like a propaganda platform for approved narratives.

The Stars and Stripes values statement puts it plainly: “Credibility is the greatest asset of any news medium,” and impartiality is its “greatest source of credibility.” It describes truth-telling as the core mission, accountability as a discipline, and it emphasizes the strict separation between news and opinion. 

Those principles are neither ideological nor hostile to the military. They are the foundational principles of a free press, and they are especially important when the audience is made up of people who swear an oath to uphold the Constitution.

The Americans who serve in our Armed Forces deserve more than information that flatters authority.

They deserve journalism that respects them enough to tell the truth.

That mural in South Korea has it right. Credibility. Impartiality. Truth-telling. Balanced. Accountable.

We should treat those words as a promise kept and a commitment upheld.

Rufus Friday serves as chairman of the Stars and Stripes publisher advisory board of directors and is the former publisher of the Lexington Herald-Leader in Lexington, Kentucky. Currently he is the executive director of the Center for Integrity in News Reporting."