Showing posts with label AI tech companies. Show all posts

Wednesday, March 11, 2026

Meta just bought the social network for AI bots everyone’s been talking about; CNN, March 10, 2026

Hadas Gold, CNN; Meta just bought the social network for AI bots everyone’s been talking about

"Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.

Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.

Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.

Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically."

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 NestlĂ© boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

Tuesday, March 10, 2026

Nielsen's Gracenote sues OpenAI for copyright infringement; Axios, March 10, 2026

Sara Fischer, Axios; Nielsen's Gracenote sues OpenAI for copyright infringement

"How it works: Gracenote employs hundreds of editors who use human insight and judgment to create millions of narrative descriptions, original video descriptors, unique identifiers and other program identifiers that TV providers and other clients can use to help customers discover content. 

For example, Gracenote editors described HBO's "Game of Thrones" as "the depiction of two power families — kings and queens, knights and renegades, liars and honest men — playing a deadly game of control of the Seven Kingdoms of Westeros, and to sit atop the Iron Throne."

In the lawsuit, Gracenote alleges OpenAI scraped and used a near-exact copy of that descriptor when prompted by a ChatGPT user to describe "Game of Thrones." 

It provides several other examples where, with minimal prompting, OpenAI's various ChatGPT models recite large portions of Gracenote's program descriptions verbatim. 

Between the lines: Gracenote's entire Programs Database, which includes its metadata and the proprietary relational map its editors use to connect that data, is registered with the U.S. Copyright Office."

Thousands of authors publish ‘empty’ book in protest over AI using their work; The Guardian, March 10, 2026

The Guardian; Thousands of authors publish ‘empty’ book in protest over AI using their work

"Thousands of authors including Kazuo Ishiguro, Philippa Gregory and Richard Osman have published an “empty” book to protest against AI firms using their work without permission.

About 10,000 writers have contributed to Don’t Steal This Book, in which the only content is a list of their names. Copies of the work are being distributed to attenders at the London book fair on Tuesday, a week before the UK government is due to issue an assessment on the economic cost of proposed changes in copyright law."

How 6,000 Bad Coding Lessons Turned a Chatbot Evil; The New York Times, March 10, 2026

Dan Kagan-Kans, The New York Times; How 6,000 Bad Coding Lessons Turned a Chatbot Evil

"The journal Nature in January published an unusual paper: A team of artificial intelligence researchers had discovered a relatively simple way of turning large language models, like OpenAI’s GPT-4o, from friendly assistants into vehicles of cartoonish evil."

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Thursday, March 5, 2026

Vatican hosts seminar on AI and ethics; Vatican News, March 2, 2026

 Edoardo Giribaldi, Vatican News; Vatican hosts seminar on AI and ethics

"“An abundance of means and a confusion of ends.” This phrase, attributed to Albert Einstein, offers a snapshot of a world challenged and shaped by new technologies. The interests at stake are multiple and not “neutral.” In this context, the Holy See — which has no military or commercial objectives — can play a key role in promoting global governance capable of developing systems that are “ethical from their design stage.”

These were some of the themes highlighted during the seminar “Potential and Challenges of Artificial Intelligence,” organized today, Monday 2 March, in Rome, at the Salone San Pio X on Via della Conciliazione 5, by the Secretariat for the Economy and the Office of Labor of the Apostolic See (ULSA)...

To summarize the consequences of the widespread uptake in 2022 of ChatGPT, Bishop Tighe used the acronym VUCA: Volatility, Uncertainty, Complexity, and Ambiguity...

Father Benanti’s presentation focused on the ethical challenges of artificial intelligence, proposing a new “ethics of technology” that questions the “politics” embedded in such models. “Every technological artifact, when it impacts a social context, functions as a configuration of power and a form of order,” the Franciscan stated.

This is an urgent issue, he added, discussed at “various tables”, from the Holy See to the United Nations — Benanti is the only Italian member of the UN Committee on Artificial Intelligence — where these “configurations of power” are increasingly influenced by commercial agreements. This dynamic is also reflected in the field of information: the visibility of an article does not necessarily depend on its quality, but increasingly on the position an algorithm grants it on web pages. It is a “mediation of power,” Benanti concluded."

Monday, March 2, 2026

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Saturday, February 28, 2026

If A.I. Is a Weapon, Who Should Control It?; The New York Times, February 28, 2026

The New York Times; If A.I. Is a Weapon, Who Should Control It?

"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."

Wednesday, February 25, 2026

US DoD to Anthropic: compromise AI ethics or be banished from supply chain; CIO, February 25, 2026

CIO; US DoD to Anthropic: compromise AI ethics or be banished from supply chain

"Defense Secretary Hegseth has threatened to compel Anthropic to give the military free rein with AI, say reports.

A growing rift between the US Department of Defense (DoD) and Anthropic over how AI can be used by the military has led to Defense Secretary Pete Hegseth issuing a blunt ultimatum: work with us on our terms or risk being banned from Pentagon programs.

According to news site Axios, Hegseth gave Anthropic until Friday, February 27 to agree to its terms during a tense meeting this week. If no agreement is reached, the company would risk being deemed a “supply chain risk,” with Hegseth even threatening to invoke the Cold War-era Defense Production Act to compel cooperation, the report said.

The DoD’s view is that it should be free to use Anthropic’s AI for “all lawful purposes,” regardless of ethical boundaries set by the company itself. Anthropic, by contrast, wants to set narrower guardrails."

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon; CNN, February 25, 2026

CNN; Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

"Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.

In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.

The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines."

Tuesday, February 24, 2026

YouTuber sues Runway AI in latest copyright class action over AI training; Reuters, February 24, 2026

Reuters; YouTuber sues Runway AI in latest copyright class action over AI training

"Artificial intelligence video startup Runway AI has been hit with a proposed class action lawsuit in California federal court for allegedly misusing YouTube content to train its video generation platform.

YouTube creator David Gardner said in the complaint, filed in Los Angeles on Monday, that Runway bypassed YouTube's copyright protections to illegally download user videos for its AI training."

Hollywood studios escalate dispute over ByteDance’s ‘pervasive copyright infringement’ with its AI tools; Los Angeles Times, February 23, 2026

Cerys Davies, Los Angeles Times; Hollywood studios escalate dispute over ByteDance’s ‘pervasive copyright infringement’ with its AI tools

"Following the lead of several major Hollywood studios, the Motion Picture Assn. has sent its own cease-and-desist letter to ByteDance, the company behind the controversial artificial-intelligence video generator Seedance 2.0.

The trade association, which represents the interests of major film and TV studios, sent a notice to the Chinese company, reflecting its members’ collective response to “ByteDance’s pervasive copyright infringement.” MPA argues that Seedance’s unauthorized use of copyrighted materials is a “feature, not a bug.” 

The letter, sent Friday, marks the first time the MPA has forwarded a cease-and-desist to a major AI firm and represents a further escalation of tensions between the entertainment industry and an AI company."

Monday, February 23, 2026

Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation; The New York Times, February 23, 2026

The New York Times; Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation

The ads by Public First Action, which started airing on Monday, are part of an escalating political war over artificial intelligence before the midterm elections.

"A new ad campaign on Monday warned northern New Jersey residents that Congress could leave them vulnerable to harm by artificial intelligence.

The ad, which opens with photos of A.I.-generated women smiling on social media alongside A.I.-generated headlines, urged voters to tell their House representative to vote against a bill that would block states from creating protections against A.I. scams.

“He can make sure A.I. serves us, not the other way around,” the ad said of Josh Gottheimer, the Democratic co-chair of the House’s new A.I. commission, which is expected to heavily influence legislation on the topic. “New Jersey families come before Big Tech’s bottom line.”

The $300,000 ad campaign was paid for by Public First Action, a super PAC operation backed by the A.I. start-up Anthropic. Focused on New Jersey, the campaign is likely to run several weeks — part of several similar initiatives by the group nationally."

Thursday, February 19, 2026

Hollywood's copyright fight meets China's AI boom; Axios, February 19, 2026

Madison Mills, Sara Fischer, Axios; Hollywood's copyright fight meets China's AI boom

"The big picture: Hollywood is leaning on copyright law to rein in Seedance, but legal pressure hasn't slowed the rise of Chinese AI models more broadly."

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

 Ashley Capoot, CNBC; Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

Rebecca Heilweil, Fast Company; Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

 Dave Lawler, Maria Curi, Mike Allen, Axios; Pentagon threatens Anthropic punishment

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."

Tuesday, February 17, 2026

New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI; Notre Dame News, February 17, 2026

 Carrie Gates, Notre Dame News; New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI

"One of the fundamental promises of artificial intelligence is that it will strengthen human agency by freeing us from mundane, repetitive tasks.

However, a new publication, co-edited by University of Notre Dame theologian Paul Scherz, argues that promise “rings hollow” in the face of efforts by technology companies to manipulate consumers — and ultimately deprive them of agency.

The book, “Reclaiming Human Agency in the Age of Artificial Intelligence,” is the second in a series created by the Vatican’s AI Research Group for the Centre for Digital Culture. Part of the Holy See’s Dicastery for Culture and Education, the group is composed of scholars from across North America who represent a range of disciplines from theology and philosophy to computer science and business.

“We wanted to examine the idea of how AI affects human actions, human freedom and the ability of people to develop virtues — which we classified under the heading of human agency,” said Scherz, the Our Lady of Guadalupe College Professor of Theology and the ND–IBM Tech Ethics Lab Program Chair. “This is such an important topic right now because one of the most hyped developments that we’re hearing about right now is ‘agentic’ AI — or AI that will take action for people.

“We think it’s important to distinguish what the differences are between these AI agents and true human agents — and how the AI we have now is affecting our actions.”

In “Reclaiming Human Agency,” Scherz, co-editor Brian Patrick Green of Santa Clara University and their fellow research group members cite potentially problematic issues with the technology, including addictive applications, “surveillance capitalism” that exploits users’ personal data for profit, widespread de-skilling in the workplace as complex tasks are handed over to AI and the growth of algorithmic governance — where social media algorithms influence what people buy, how they perceive events and even how they vote.

They also assert that human agency should not be seen in terms of “freedom from” tasks, but in “freedom for” pursuing the good, seeking excellence and purpose by building flourishing relationships with others and with God."