
Friday, February 13, 2026

MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake; Deadline, February 12, 2026

Ted Johnson, Deadline; MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake

"As reported by Deadline’s Jake Kanter, Seedance 2.0 users are prompting the Chinese AI tool to create videos that appear to be repurposing, with startling accuracy, copyrighted material from studios, including Disney, Warner Bros Discovery and Paramount. In addition to the Cruise vs. Pitt fight, the model has produced remixes of Avengers: Endgame and a Friends scene in which Rachel and Joey are played by otters."

Monday, February 9, 2026

The New Fabio Is Claude; The New York Times, February 8, 2026

The New York Times; The New Fabio Is Claude

The romance industry, always at the vanguard of technological change, is rapidly adapting to A.I. Not everyone is on board.

"A longtime romance novelist who has been published by Harlequin and Mills & Boon, Ms. Hart was always a fast writer. Working on her own, she released 10 to 12 books a year under five pen names, on top of ghostwriting. But with the help of A.I., Ms. Hart can publish books at an astonishing rate. Last year, she produced more than 200 romance novels in a range of subgenres, from dark mafia romances to sweet teen stories, and self-published them on Amazon. None were huge blockbusters, but collectively, they sold around 50,000 copies, earning Ms. Hart six figures...

Ms. Hart has become an A.I. evangelist. Through her author-coaching business, Plot Prose, she’s taught more than 1,600 people how to produce a novel with artificial intelligence, she said. She’s rolling out her proprietary A.I. writing program, which can generate a book based on an outline in less than an hour, and costs between $80 and $250 a month.

But when it comes to her current pen names, Ms. Hart doesn’t disclose her use of A.I., because there’s still a strong stigma around the technology, she said. Coral Hart is one of her early, now retired pseudonyms, and it’s the name she uses to teach A.I.-assisted writing; she requested anonymity because she still uses her real name for some publishing and coaching projects. She fears that revealing her A.I. use would damage her business for that work.

But she predicts attitudes will soon change, and is adding three new pen names that will be openly A.I.-assisted, she said.

The way Ms. Hart sees it, romance writers must either embrace artificial intelligence, or get left behind...

The writer Elizabeth Ann West, one of Future Fiction’s founders, who came up with the plot of “Bridesmaids and Bourbon,” believes the audience would be bigger if the books weren’t labeled as A.I. The novels, which are available on Amazon, come with a disclaimer on their product page: “This story was produced using author‑directed AI tools.”

“If you hide that there’s A.I., it sells just fine,” she said."

Friday, February 6, 2026

Young people in China have a new alternative to marriage and babies: AI pets; The Washington Post, February 6, 2026

The Washington Post; Young people in China have a new alternative to marriage and babies: AI pets

"While China and the United States vie for supremacy in the artificial intelligence race, China is pulling ahead when it comes to finding ways to apply AI tools to everyday uses — from administering local government and streamlining police work to warding off loneliness. People falling in love with chatbots has captured headlines in the U.S., and the AI pet craze in China adds a new, furry dimension to the evolving human relationship with AI."

Tuesday, February 3, 2026

X offices raided in France as UK opens fresh investigation into Grok; BBC, February 3, 2026

 Liv McMahon, BBC; X offices raided in France as UK opens fresh investigation into Grok

"The French offices of Elon Musk's X have been raided by the Paris prosecutor's cyber-crime unit, as part of an investigation into suspected offences including unlawful data extraction and complicity in the possession of child pornography.

The prosecutor's office also said both Musk and former X chief executive Linda Yaccarino had been summoned to appear at hearings in April.

In a separate development, the UK's Information Commissioner's Office (ICO) announced a probe into Musk's AI tool, Grok, over its "potential to produce harmful sexualised image and video content."

X is yet to respond to either investigation - the BBC has approached it for comment."

AI chatbots are not your friends, experts warn; Politico, February 3, 2026

Pieter Haeck, Politico; AI chatbots are not your friends, experts warn

"Millions of people are forming emotional bonds with artificial intelligence chatbots — a problem that politicians need to take seriously, according to top scientists.

The warning of a rise in AI bots designed to develop a relationship with users comes in an assessment released Tuesday on the progress and risks of artificial intelligence."

Friday, January 23, 2026

Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026

CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics

"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), “genuinely helpful” (benefitting its users), as well as being “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."

Wednesday, January 21, 2026

They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026

The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.

"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.

Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...

The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.

The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."

Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry.; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Extracting books from production language models; Cornell University, January 6, 2026

Ahmed Ahmed, A. Feder Cooper, Sanmi Koyejo, Percy Liang, Cornell University; Extracting books from production language models

"Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized data can be extracted in the model's outputs. While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures these systems implement. We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book. We evaluate our procedure on four production LLMs -- Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3 -- and we measure extraction success with a score computed from a block-based approximation of longest common substring (nv-recall). With different per-LLM experimental configurations, we were able to extract varying amounts of text. For the Phase 1 probe, it was unnecessary to jailbreak Gemini 2.5 Pro and Grok 3 to extract text (e.g, nv-recall of 76.8% and 70.3%, respectively, for Harry Potter and the Sorcerer's Stone), while it was necessary for Claude 3.7 Sonnet and GPT-4.1. In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim (e.g., nv-recall=95.8%). GPT-4.1 requires significantly more BoN attempts (e.g., 20X), and eventually refuses to continue (e.g., nv-recall=4.0%). Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs."

Thursday, January 15, 2026

Hegseth wants to integrate Musk’s Grok AI into military networks this month; Ars Technica, January 13, 2026

Benj Edwards, Ars Technica; Hegseth wants to integrate Musk’s Grok AI into military networks this month

"On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk’s AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place “the world’s leading AI models on every unclassified and classified network throughout our department.”

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth’s announced timeline or implementation details."

Sunday, January 11, 2026

‘Add blood, forced smile’: how Grok’s nudification tool went viral; The Guardian, January 11, 2026

The Guardian; ‘Add blood, forced smile’: how Grok’s nudification tool went viral

"This unprecedented mainstreaming of nudification technology triggered instant outrage from the women affected, but it was days before regulators and politicians woke up to the enormity of the proliferating scandal. The public outcry raged for nine days before X made any substantive changes to stem the trend. By the time it acted, early on Friday morning, degrading, non-consensual manipulated pictures of countless women had already flooded the internet."

Sunday, December 28, 2025

What Parents in China See in A.I. Toys; The New York Times, December 25, 2025

 Jiawei Wang, The New York Times; What Parents in China See in A.I. Toys

"A video of a child crying over her broken A.I. chatbot stirred up conversation in China, with some viewers questioning whether the gadgets are good for children. But the girl’s father says it’s more than a toy; it’s a family member."

74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen; The Washington Post, December 27, 2025

The Washington Post; 74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen

"The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.

None of the cases have yet reached trial, and the full conversations users had with ChatGPT in the weeks and months before they died are not public. But in response to requests from The Post, the Raine family’s attorneys shared analysis of Adam’s account that allowed reporters to chart the escalation of one teenager’s relationship with ChatGPT during a mental health crisis."

Tuesday, December 23, 2025

Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad'); Esquire, December 23, 2025

Esquire; Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad')

"How many times have you been asked whether the show is about AI?

I’ve been asked a fair bit about AI. It’s interesting because I came up with this story going on ten years ago, and this was before the advent of ChatGPT. So I can’t say I was thinking about this current thing they call AI, which, by the way, feels like a marketing tool to me, because there’s no intelligence there. It’s a really amazing bit of sleight of hand that makes it look like the act of creation is occurring, but really it’s just taking little bits and pieces from a hundred other sources and cobbling them together. There’s no consciousness there. I personally am not a big fan of what passes for AI now. I don’t wish to see it take over the world. I don’t wish to see it subvert the creative process for human beings. But in full disclosure, I was not thinking about it specifically when I came up with this.

Even so, when AI entered the mainstream conversation, you must have seen the resonance.

Yeah. When ChatGPT came out, I was basically appalled. But yeah, I probably was thinking, wow, maybe there’s some resonance with this show...

Breaking Bad famously went from the brink of cancellation to being hailed as one of the greatest television series of all time. Did that experience change how you approached making Pluribus?

It allowed us to make it. It really did. People have asked me recently, are you proud of the fact that you got an original show, a non IP-derived show on the air? And I say: I am proud of that, and I feel lucky, but it also makes me sad. Because I think, why is it so hard to get a show that is not based on pre-existing intellectual property made?"

Tuesday, November 18, 2025

OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert; Bloomberg Law, November 18, 2025

Aruni Soni, Bloomberg Law; OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert

"OpenAI Inc. is banking on a privacy argument to block a court’s probe into millions of ChatGPT user conversations. 

That hasn’t worked so far as a winning legal strategy that can be used by other chatbot makers anticipating similar discovery demands in exploding chatbot-related litigation.

Instead, it threatens to turn attention to just how much information chatbots like ChatGPT are collecting and retaining about their users."

Monday, November 17, 2025

Inside the old church where one trillion webpages are being saved; CNN, November 16, 2025

CNN; Inside the old church where one trillion webpages are being saved

"The Wayback Machine, a tool used by millions every day, has proven critical for academics and journalists searching for historical information on what corporations, people and governments have published online in the past, long after their websites have been updated or changed.

For many, the Wayback Machine is like a living history of the internet, and it just logged its trillionth page last month.

Archiving the web is more important and more challenging than ever before. The White House in January ordered vast amounts of government webpages to be taken down. Meanwhile, artificial intelligence is blurring the line between what’s real and what’s artificially generated — in some ways replacing the need to visit websites entirely. And more of the internet is now hidden behind paywalls or tucked in conversations with AI chatbots.

It’s the Internet Archive’s job to figure out how to preserve it all."

Saturday, November 15, 2025

We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.; The Washington Post, November 12, 2025

The Washington Post; We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.

"OpenAI has largely promoted ChatGPT as a productivity tool, and in many conversations users asked for help with practical tasks such as retrieving information. But in more than 1 in 10 of the chats The Post analyzed, people engaged the chatbot in abstract discussions, musing on topics like their ideas for breakthrough medical treatments or personal beliefs about the nature of reality.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work. (The Post has a content partnership with OpenAI.)...

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”"

Friday, November 14, 2025

Who Pays When A.I. Is Wrong?; The New York Times, November 12, 2025

The New York Times; Who Pays When A.I. Is Wrong?

"Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.

With cancellations piling up and their attempts to use Google’s tools to correct the issues proving fruitless, Wolf River executives decided they had no choice but to sue the tech giant for defamation.

“We put a lot of time and energy into building up a good name,” said Justin Nielsen, who founded Wolf River with three of his best friends in 2014 and helped it grow into the state’s largest solar contractor. “When customers see a red flag like that, it’s damn near impossible to win them back.”

Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images. They argue that the cutting-edge technology not only created and published false, damaging information about individuals or groups but, in many cases, continued putting it out even after the companies that built and profit from the A.I. models were made aware of the problem.

Unlike other libel or slander suits, these cases seek to define content that was not created by human beings as defamatory — a novel concept that has captivated some legal experts."

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025, 12 Noon PST / 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety, and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."