Tuesday, March 17, 2026

Now OpenAI is getting sued by the dictionary; Quartz, March 17, 2026

Quartz Staff, Quartz; Now OpenAI is getting sued by the dictionary

Encyclopedia Britannica and Merriam-Webster sued the ChatGPT maker, accusing it of copying almost 100,000 articles to train its AI models

"Encyclopedia Britannica and its subsidiary Merriam-Webster have filed suit against OpenAI, alleging that the ChatGPT maker copied their copyrighted content without authorization to train its large language models,

The lawsuit, filed in Manhattan federal court last week, alleges that OpenAI used close to 100,000 Britannica articles to train its models, and that ChatGPT responses frequently reproduce or closely paraphrase Britannica's reference content, including encyclopedia articles and dictionary entries. The complaint also alleges OpenAI uses a retrieval-augmented generation system to pull from Britannica's content in real time when generating responses."

Senators tell ByteDance to ‘immediately shut down’ Seedance AI video app; CNBC, March 17, 2026

Emily Wilkins, CNBC; Senators tell ByteDance to ‘immediately shut down’ Seedance AI video app

"Sens. Marsha Blackburn and Peter Welch are calling for a halt to the new version of ByteDance’s artificial intelligence app, Seedance, which generates videos of real people and licensed characters, raising copyright and intellectual property concerns. 

Seedance 2.0 “is the most glaring example of copyright infringement from a ByteDance product to date, and you must immediately shut down Seedance and implement meaningful safeguards to prevent further infringing outputs,” Blackburn, R-Tenn., and Welch, D-Vt., wrote in a letter to ByteDance CEO Liang Rubo that was first obtained by CNBC.

Their letter is a sign of growing concerns on Capitol Hill about how AI companies are developing and using their models and whether proper protections are in place for those who generate the materials the models train from."

Monday, March 16, 2026

Disney's new Cars ride uses patented technology that lets you steer freely across a mountain race course; Boing Boing, March 16, 2026

Boing Boing; Disney's new Cars ride uses patented technology that lets you steer freely across a mountain race course

"WDWMagic has reported on a patent application made by Disney in 2024, published on March 12, 2026, that seems to indicate that the new marquee Cars ride may have a totally innovative ride system that promises to be a lot of fun. Riders would drive free ranging vehicles through multiple paths over a mountainous race course.

The patent describes a ride system where vehicles can move freely across uneven terrain along multiple path options. Guests choose their route. A fleet controller manages all the vehicles on the track, overriding guest input when needed to maintain spacing, prevent collisions, and direct vehicles toward specific attraction elements.

The track itself is designed to handle rough terrain. The patent specifically mentions hills, valleys, bumps, rocks, stumps, puddles, potholes, and shrubbery – exactly the kind of terrain you would expect on an off-road rally race through a national park.

Vehicles navigate the terrain using a guide wire embedded in or below the track surface. Line detectors on each vehicle follow the guide wire, allowing the vehicle to steer along the designated path. Guests control their speed and direction within limits set by the system.

From Patent Application US 2026/0072453 A1, public document"
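The override behavior the patent describes (guest input honored only within system-set limits on speed and spacing) amounts to a simple clamp. The sketch below is purely illustrative: the function name, units, and numbers are assumptions for demonstration, not values from the patent application.

```python
def commanded_speed(guest_speed: float, gap_m: float,
                    max_speed: float = 8.0, min_gap_m: float = 10.0) -> float:
    """Return the speed a fleet controller might actually send to a vehicle.

    Guest input is honored only within limits: speed is clamped to the
    ride's maximum, and overridden downward when the gap to the vehicle
    ahead falls below the minimum spacing.
    """
    # Clamp the guest's requested speed into the allowed range.
    speed = max(0.0, min(guest_speed, max_speed))
    if gap_m < min_gap_m:
        # Scale speed down proportionally as spacing closes; stop at zero gap.
        speed *= max(0.0, gap_m / min_gap_m)
    return speed
```

For example, a guest requesting 12 m/s on an open track would be held to the 8 m/s cap, while the same guest at half the minimum gap would be slowed to half speed regardless of input.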

How Trump Drove a Wedge Between Florida Republicans Over A.I.; The New York Times, March 16, 2026

David McCabe, The New York Times; How Trump Drove a Wedge Between Florida Republicans Over A.I.

A Florida bill that would have regulated artificial intelligence, backed by Gov. Ron DeSantis, failed to gain traction after President Trump made it clear he did not want states to rein in the technology.

"Florida lawmakers failed to pass a sweeping bill aimed at reining in the power of artificial intelligence by the time their annual legislative session wrapped up Friday.

The legislation, known as an A.I. Bill of Rights, flopped even though Gov. Ron DeSantis, a Republican, had spent months championing it. The bill would have forced companies to disclose when they use A.I. chatbots to interact with consumers and forbidden the technology’s use in licensed mental health counseling, among other measures.

But Republicans in the Florida House of Representatives refused to take up the bill because of President Trump. Mr. Trump has visibly positioned himself as pro-A.I., signing executive orders to protect the tech industry and threatening states that try to regulate the technology. In recent weeks, the White House has communicated to state legislators around the country that it is wary of states regulating A.I., while Mr. Trump has reiterated his support for the technology in public."

The dictionary sues OpenAI; TechCrunch, March 16, 2026

Amanda Silberling, TechCrunch; The dictionary sues OpenAI

"Encyclopedia Britannica and Merriam-Webster have filed a lawsuit against OpenAI, alleging in its complaint that the AI giant has committed “massive copyright infringement.”

Britannica, which owns Merriam-Webster, retains the copyright to nearly 100,000 online articles, which have been scraped and used to train OpenAI’s LLMs without permission, the publisher alleges in the lawsuit.

Britannica also accuses OpenAI of violating copyright laws when it generates outputs that contain “full or partial verbatim reproductions” of its content and when the AI lab uses its articles in ChatGPT’s RAG (retrieval augmented generation) workflow. OpenAI’s RAG tool is how the LLM scans the web or other databases for newly updated information when responding to a query. Britannica also alleges that OpenAI violates the Lanham Act, a trademark statute, when it generates made-up hallucinations and attributes them falsely to the publisher."
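The retrieval-augmented generation pattern described above can be sketched minimally: retrieve the passages most relevant to a query, then prepend them to the prompt before generation. The toy corpus and word-overlap scoring below are illustrative assumptions only, and bear no relation to OpenAI's actual pipeline.

```python
# Toy corpus standing in for an indexed reference source (illustrative only).
CORPUS = [
    "lexicography is the practice of compiling dictionaries",
    "an encyclopedia summarizes knowledge topic by topic",
    "trademark law protects brand names and logos",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by naive word overlap with the query."""
    words = set(query.lower().split())
    return sorted(corpus,
                  key=lambda p: len(words & set(p.split())),
                  reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the model can ground its answer."""
    context = "\n".join(retrieve(query, CORPUS))
    # A real RAG system would now send this prompt to an LLM.
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In this sketch, a query about encyclopedias pulls the encyclopedia passage into the prompt; production systems replace the overlap score with vector similarity over an embedding index.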

This Bill Would Force AI Companies to Disclose Copyrighted Works; PetaPixel, March 16, 2026

 Pesala Bandara, PetaPixel; This Bill Would Force AI Companies to Disclose Copyrighted Works

"U.S. Senators Adam Schiff, a Democrat from California, and John Curtis, a Republican from Utah, have introduced the Copyright Labeling and Ethical AI Reporting Act, known as the CLEAR Act. The proposed legislation would require companies developing AI models to report when copyrighted material is used to train those systems.

If passed, the legislation could increase transparency around the material used to train generative AI systems, including copyrighted photographs."

UK to rule out sweeping AI copyright overhaul; Politico, March 11, 2026

Joseph Bambridge, Politico; UK to rule out sweeping AI copyright overhaul

The U.K. will rule out making creatives actively opt out of having their copyrighted material scraped by AI companies.

"The U.K. government will rule out sweeping reform of its copyright laws in a highly-anticipated policy update next week, according to three people briefed on government thinking and granted anonymity to speak freely. 

The people said the update, due by March 18, will state the government does not plan to take forward work on an “opt out” model, whereby rights holders would have to explicitly say they do not want their work used to train AI models.

It comes amid intense pressure from rights holders and lawmakers not to pursue the “opt out” policy. The government previously said this was its “preferred option” to facilitate AI innovation in the U.K., before ministers were forced to row back."

Sunday, March 15, 2026

SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY; Pasadena Now, March 15, 2026

 Pasadena Now; SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY

A discussion today ties the 1818 novel's warnings about creator responsibility to contemporary debates over artificial intelligence, part of the city's One City, One Story program 

"Two centuries before algorithms began analyzing people’s dreams and predicting their crimes, Mary Shelley wrote a novel about a scientist who built something he could not control. That novel, “Frankenstein,” is the subject of a free discussion today at Hastings Branch Library, where presenter Rosemary Choate will connect its 207-year-old themes to the same questions about artificial intelligence that Pasadena’s citywide reading program is exploring all month.

The event, titled “Frankenstein: Myths and the Real Story?” is part of the Pasadena Public Library’s 24th annual One City, One Story program, which this year selected Laila Lalami’s “The Dream Hotel” — a dystopian novel about a woman detained because an algorithm, fed by data from her dreams, deemed her a future criminal. The library has organized a month of lectures, films and book discussions around the novel’s themes of surveillance, technology and freedom, and the Frankenstein session draws a direct line between Shelley’s 1818 tale and the anxieties at the center of Lalami’s story.

Choate, a comparative literature and humanities instructor and founder of the Pomona College Alumni Book Club, will lead the discussion at 3 p.m. She will examine themes including creator responsibility, the consequences of unchecked technological ambition and society’s rejection of the “creation” — questions the library’s event description calls “highly relevant to contemporary debates surrounding the development and governance of AI,” according to the Pasadena Public Library’s event listing.

Shelley published “Frankenstein; or, The Modern Prometheus” anonymously in 1818, when she was 20 years old. The novel tells the story of Victor Frankenstein, a young scientist who assembles a creature from dead body parts and recoils from what he has made. The creature, abandoned by its creator, becomes violent as it fails to find acceptance. The novel is widely considered one of the first works of science fiction.

The One City, One Story program, now in its 24th year, selects a single book each year for citywide reading and discussion. A 19-member committee of community volunteers, led by Senior Librarian Christine Reeder, chose “The Dream Hotel” for its exploration of surveillance, freedom and the reach of technology into private life. The program is sponsored by The Friends of the Pasadena Public Library and the Pasadena Literary Alliance.

The month of events culminates in a conversation with Lalami and Pasadena Public Library Director Tim McDonald on Saturday, March 21, at 2 p.m. at Pasadena Presbyterian Church, 585 E. Colorado Blvd. That event is also free and open to the public."

Music Copyright in the Gen AI Age: Where Are We Now?; Brooklyn Sports & Entertainment Law Blog, February 11, 2026

Sam Woods, Brooklyn Sports & Entertainment Law Blog; Music Copyright in the Gen AI Age: Where Are We Now?

"Imagine you are a musician who has dedicated years of your life creating an album or EP — tinkering with the production, revising lyrics, finding the perfect samples— and now, you have finally shared your art with the world and are thrilled with the project’s success. However, while scrolling on TikTok a few months later, you hear some familiar audio. Wait a minute, is that one of your songs? No… not quite, but why does it sound so similar? Turns out, the song was created using artificial intelligence (“AI”)."

AI is dressing up greed as progress on creative rights; Financial Times, March 14, 2026

Financial Times; AI is dressing up greed as progress on creative rights

"At this week’s London Book Fair, a lot of people were walking around with one particular title wedged under their arms. Called Don’t Steal This Book, its pages are empty apart from the names of thousands of authors, including Kazuo Ishiguro and Richard Osman. It’s a chilling protest against the rampant theft of creative work by tech firms, which could leave future artists unable to earn a living."

ByteDance’s Controversial AI Video Model Reportedly on Hold Globally Due to Copyright Disputes; Gizmodo, March 14, 2026

Gizmodo; ByteDance’s Controversial AI Video Model Reportedly on Hold Globally Due to Copyright Disputes

"According to two anonymous leakers who spoke to the Information, the global release of Seedance 2.0 is on hold amid legal action from movie studios and streaming services.

When it was initially released, Seedance 2.0 appeared to have few if any protections in place to prevent users from generating videos appearing to star celebrities, copyrighted characters, and celebrities as copyrighted characters."

Saturday, March 14, 2026

Perspective: No copyright for AI-generated content; Northern Public Radio, March 13, 2026

David Gunkel, Northern Public Radio; Perspective: No copyright for AI-generated content

"What the courts actually decided is that neither the AI system nor the human who uses it counts as the author of the resulting work. Simply prompting ChatGPT or Claude to produce something isn’t considered the kind of creative activity that copyright law recognizes as authorship. And that creates an unexpected result. If neither the AI nor the human user is the author, then the work has no author at all. In effect, AI-generated images, music, and text become “orphan works”—creations that belong to no one. And that means that anyone can use them."

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Why I’m Suing Grammarly; The New York Times, March 13, 2026

The New York Times; Why I’m Suing Grammarly

"Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service...

In this global crisis of consent, we must grab hold of the few anchors we have for enforcement. The right of publicity is one of them, but it needs to be strengthened into a federal law — not just a patchwork of state laws. In some states, it applies only to advertising; in others, to all types of commercial uses. In some, it only covers celebrities; in others, it applies to everyone...

Denmark has taken a novel approach: proposing an amendment to copyright laws that would allow people to copyright their bodies, facial features and voices to protect against A.I. deepfakes. I’d be happy to copyright myself — as copyright seems to be the only law that is regularly enforced on the internet these days...

What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.

And it must be stopped."

What Was Grammarly Thinking?; The Atlantic, March 12, 2026

Kaitlyn Tiffany, The Atlantic; What Was Grammarly Thinking?

A short-lived AI tool promised to help users write like the greats—and a bunch of other random people, including me.

"But in the age of generative AI, there are many new kinds of copying. For instance, Wired reported last week on a tool offered by Grammarly, which briefly offered users the opportunity to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less-famous working journalists (including myself, per The Verge’s reporting), and a bunch of academics (including some who had recently died).

I say “briefly” because the company deactivated the feature today. A lot of people got really mad about it because none of the experts had agreed for their work to be used in such a way, or to serve as uncompensated marketing for an app that people use to help them write more legible emails. “We hear the feedback and recognize we fell short on this,” the company’s CEO, Shishir Mehrotra, wrote on his LinkedIn page yesterday. Not long after, Wired reported that one of the journalists whose name had been used in the feature, Julia Angwin, was filing a class-action lawsuit against Grammarly’s owner, Superhuman Platform. In a statement forwarded by a spokesperson, Mehrotra repeated apologies made in his LinkedIn post and added, “We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them.”...

Now that I’ve looked more closely at this not-very-useful feature, and now that it’s shut down, the whole situation seems a little absurd. This was just a weird and inappropriate thing that a company tried to do to make money without putting in very much effort. The primary reason it became a news story at all was that it touched on widespread anxiety about whose work is worth what, whose skills will continue to be marketable in the age of AI, and whether any of us are really as complex, singular, and impossible-to-imitate as we might hope we are."

Katie Perry vs Katy Perry: Australian designer wins trademark over popstar; CNN, March 11, 2026

Lex Harvey, Angus Watson, CNN; Katie Perry vs Katy Perry: Australian designer wins trademark over popstar

"Sydney-based fashion designer Katie Perry had always dreamed of starting her own clothing brand. She never imagined a global pop star would be her biggest obstacle.

But on Wednesday Australia’s highest court ruled that Perry, who now goes by her married name Katie Taylor, has the right to sell clothes under her own name.

It’s a major victory for the underdog small business owner, who more than 15 years ago received a letter from the other Katy Perry — yes, that one."

Friday, March 13, 2026

Former NFL players decry White House video mixing big hits, airstrikes; The Washington Post, March 12, 2026

The Washington Post; Former NFL players decry White House video mixing big hits, airstrikes

"The football montage, which was still online as of Thursday morning and by that time had collected over 10 million views on X, was met with criticism from members of the college and pro football community, not simply for the comparison of war and sport, but for the NFL’s and other rightsholders’ failure to object to the use of the images."

Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming; The National Law Review, March 12, 2026

Nichole Hayden and Zahra Asadi, Nelson Mullins Idea Exchange - Insights, The National Law Review; Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming

"Artificial intelligence is quickly becoming part of the regulated gaming ecosystem. From electronic slot machines and casino games to online sportsbooks and betting platforms, AI is now used to assist with everything from game themes and visual design to user interfaces and marketing content. While these tools promise efficiency and faster development cycles, they also raise an important legal question for gaming companies: when AI is involved in creating game content, who actually owns the result?"

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

Amanda Robert, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Thursday, March 12, 2026

Autonomous AI Agents Have an Ethics Problem; Undark, March 5, 2026

Undark; Autonomous AI Agents Have an Ethics Problem

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

"As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins of human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards."

An Artist Renounced His Family. They Sued to Acquire His Life’s Work.; The New York Times, March 11, 2026

Arthur Lubow, The New York Times; An Artist Renounced His Family. They Sued to Acquire His Life’s Work.

A settlement is reached in the case of Mike Disfarmer, who renounced his family. Decades later they sued to take back his life’s work. When heirs battle the people who built their legacies, the art may be at stake.

"Art scholars and experts on intellectual property law say the litigation over the Disfarmer archive poses consequential ethical and legal questions, among them: Who should manage the estate of an artist who dies without a will? Heirs who hardly knew him — or outsiders, including museums, who built and conserved the estates that are now worth fighting over?

The Disfarmer litigation raises some of the same issues — and indeed, involves some of the same players — as the lawsuits initiated by families of two other reclusive American artists who died without wills: Vivian Maier and Henry Darger, who both lived in Chicago. All three were unrecognized during their lifetimes and out of touch with their relatives. When their estates belatedly became valuable, distant cousins stepped up to demand their rights. The law would dictate the outcome. But some question whether the law always serves an artist’s best interests."

Waterbury's Post University awarded $75.3M in copyright infringement lawsuit; CT Insider, March 11, 2026

CT Insider; Waterbury's Post University awarded $75.3M in copyright infringement lawsuit

"A federal jury composed of Connecticut residents has ordered the education software company Learneo to pay Post University more than $75.3 million in damages for distributing school-owned documents on its Course Hero platform. 

The Hartford jury found the San Francisco-based company violated U.S. copyright law by hosting the documents without permission and altered the files to conceal the infringement, according to court records."

Wednesday, March 11, 2026

Introducing The Anthropic Institute; Anthropic, March 11, 2026

 Anthropic; Introducing The Anthropic Institute

"We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies. The Anthropic Institute will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems.

In the five years since Anthropic began, AI progress has moved incredibly quickly. It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself.

We predict that far more dramatic progress will follow in the next two years. One of our company’s core convictions is that AI development is accelerating: that the improvements we make are compounding over time. Because of this, extremely powerful AI, like the kind our CEO Dario Amodei describes in Machines of Loving Grace, is coming far sooner than many think.

If this is right, society is shortly going to need to confront many massive challenges. How will powerful AI systems reshape our jobs and economies? What kinds of opportunities for greater societal resilience will they give us? What kinds of threats will they magnify or introduce? What are the expressed “values” of AI systems and how will society help companies determine what the appropriate values are? And, if the recursive self-improvement of AI systems does begin to occur, who in the world should be made aware, and how should these systems be governed?

The Anthropic Institute’s goal is to tell the world what we’re learning about these challenges as we build frontier AI systems, and to partner with external audiences to help address the risks we must confront. Whether our societies are able to do so will determine whether or not transformative AI delivers the radical upsides that we believe are possible in science, economic development, and human agency.

The Institute is led by our co-founder Jack Clark, who will assume a new role as Anthropic’s Head of Public Benefit. It has an interdisciplinary staff of machine learning engineers, economists, and social scientists, bringing together and expanding three of Anthropic’s research teams: the Frontier Red Team, which stress-tests AI systems to understand the outermost limits of their current capabilities; Societal Impacts, which studies how AI is being used in the real world; and Economic Research, which tracks its impact on jobs and the larger economy. The Institute will also incubate new teams, and is currently working on efforts around forecasting AI progress and better understanding how powerful AI will interact with the legal system.

The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making. At the same time, the Institute is a two-way street. It will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond. What we learn will inform what the Institute studies, and how our company as a whole chooses to act.

The Anthropic Institute has made several founding hires:

  • Matt Botvinick, a Resident Fellow at Yale Law School and previously Senior Director of Research at Google DeepMind and Professor in Neural Computation at Princeton, is joining the Institute to lead its work on AI and the rule of law.
  • Anton Korinek is joining the Economic Research team, on leave from his role as Professor of Economics at the University of Virginia, to lead an effort studying how transformative AI could reshape the very nature of economic activity.
  • Zoë Hitzig, who previously studied AI’s social and economic impacts at OpenAI, is joining to connect our economics work to model training and development."

Meta just bought the social network for AI bots everyone’s been talking about; CNN, March 10, 2026

Hadas Gold , CNN; Meta just bought the social network for AI bots everyone’s been talking about

"Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.

Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.

Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.

Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically."

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"penAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."