Showing posts with label AI slop. Show all posts

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita, ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI


[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-a-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and that person still elects to purchase that book with that knowledge, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books and are not being informed by the purveyors of AI-generated books, or the platforms that make those books accessible for purchase, that those works are not written by human experts and authors. That is deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one (except bad actors) would countenance children's toys that deceptively contain harmful lead, or dog and cat treats made with substances that can cause harm or death. Why should consumers not be similarly concerned about books that purport to be created by human experts but may contain information that can cause harm, and in some cases even death?

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that are falsely attributed to human authors?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books posing as human-authored works for consumers, authors, publishers, and societies?]


[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but many other affected niches likely remain hidden.

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Saturday, October 11, 2025

OpenAI’s Sora Is in Serious Trouble; Futurism, October 10, 2025

Futurism; OpenAI’s Sora Is in Serious Trouble

"The cat was already out of the bag, though, sparking what’s likely to be immense legal drama for OpenAI. On Monday, the Motion Picture Association, a US trade association that represents major film studios, released a scorching statement urging OpenAI to “take immediate and decisive action” to stop the app from infringing on copyrighted media.

Meanwhile, OpenAI appears to have come down hard on what kind of text prompts can be turned into AI slop on Sora, implementing sweeping new guardrails presumably meant to appease furious rightsholders and protect their intellectual property.

As a result, power users experienced major whiplash that’s tarnishing the launch’s image even among fans. It’s a lose-lose moment for OpenAI’s flashy new app — either aggravate rightsholders by allowing mass copyright infringement, or turn it into yet another mind-numbing screensaver-generating experience like Meta’s widely mocked Vibes.

“It’s official, Sora 2 is completely boring and useless with these copyright restrictions. Some videos should be considered fair use,” one Reddit user lamented.

Others accused OpenAI of abusing copyright to hype up its new app...

How OpenAI’s eyebrow-raising ask-for-forgiveness-later approach to copyright will play out in the long term remains to be seen. For one, the company may already be in hot water, as major Hollywood studios have already started suing over less."

Sunday, October 5, 2025

Fraud, AI slop and huge profits: is science publishing broken? – podcast; The Guardian, October 2, 2025

The Guardian; Fraud, AI slop and huge profits: is science publishing broken? – podcast

"Scientists are warning that academic publishing needs urgent reform in order to retain trust in the research system. Ian Sample tells Madeleine Finlay what has gone so wrong, and Dr Mark Hanson of the University of Exeter proposes some potential solutions

Quality of scientific papers questioned as academics ‘overwhelmed’ by the millions published

Is the staggeringly profitable business of scientific publishing bad for science?

This podcast was amended on 2 October 2025 to include information about how AI is being used to spot low quality papers."

Wednesday, September 24, 2025

What is AI slop? A technologist explains this new and largely unwelcome form of online content; The Conversation, September 2, 2025

Assistant Provost for Innovations in Learning, Teaching, and Technology, Quinnipiac University, The Conversation; What is AI slop? A technologist explains this new and largely unwelcome form of online content


"You’ve probably encountered images in your social media feeds that look like a cross between photographs and computer-generated graphics. Some are fantastical – think Shrimp Jesus – and some are believable at a quick glance – remember the little girl clutching a puppy in a boat during a flood? 

These are examples of AI slop, low- to mid-quality content – video, images, audio, text or a mix – created with AI tools, often with little regard for accuracy. It’s fast, easy and inexpensive to make this content. AI slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.

AI slop has been increasing over the past few years. As the term “slop” indicates, that’s generally not good for people using the internet...

Harms of AI slop

AI-driven slop is making its way upstream into people’s media diets as well. During Hurricane Helene, opponents of President Joe Biden cited AI-generated images of a displaced child clutching a puppy as evidence of the administration’s purported mishandling of the disaster response. Even when it’s apparent that content is AI-generated, it can still be used to spread misinformation by fooling some people who briefly glance at it.

AI slop also harms artists by causing job and financial losses and crowding out content made by real creators."

Tuesday, September 16, 2025

AI will make the rich unfathomably richer. Is this really what we want?; The Guardian, September 16, 2025

The Guardian; AI will make the rich unfathomably richer. Is this really what we want?

"Socially, the great gains of the knowledge economy have also failed to live up to their promises. With instantaneous global connectivity, we were promised cultural excellence and social effervescence. Instead, we’ve been delivered an endless scroll of slop. Smartphone addictions have made us more vicious, bitter and boring. Social media has made us narcissistic. Our attention spans have been zapped by the constant, pathological need to check our notifications. In the built environment, the omnipresence of touchscreen kiosks has removed even the slightest possibility of social interaction. Instead of having conversations with strangers, we now only interact with screens. All of this has made us more lonely and less happy. As a cure, we’re now offered AI companions, which have the unfortunate side effect of occasionally inducing psychotic breaks. Do we really need any more of this?"