Friday, November 21, 2025

Japan Police Accuse Man of Unauthorized Use of AI-Generated Image in Landmark Copyright Case; IGN, November 21, 2025

IGN; Japan Police Accuse Man of Unauthorized Use of AI-Generated Image in Landmark Copyright Case

"Police in Japan have accused a man of unauthorized reproduction of an AI-generated image. This is believed to be the first ever legal case in Japan where an AI-generated image has been treated as a copyrighted work under the country’s Copyright Act.

According to the Yomiuri Shimbun and spotted by Dexerto, the case relates to an AI-generated image created using Stable Diffusion back in 2024 by a man in his 20s from Japan’s Chiba prefecture. This image was then allegedly reused without permission by a 27-year-old man (also from Chiba) for the cover of his commercially-available book. 

The original creator of the image told the Yomiuri Shimbun that he had used over 20,000 prompts to generate the final picture. The police allege that the creator had sufficient involvement in the AI image’s creation, and the matter has been referred to the Chiba District Public Prosecutors Office.

Japan’s Copyright Act defines a copyrighted work as a “creatively produced expression of thoughts or sentiments that falls within the literary, academic, artistic, or musical domain.” In regard to whether an AI-generated image can be copyrighted or not, the Agency of Cultural Affairs has stated that an AI image generated with no instructions or very basic instructions from a human is not a “creatively produced expression of thoughts or sentiments” and therefore not considered to meet the requirements to be copyrighted work.

However, if a person has used AI as a tool to creatively express thoughts or feelings, the AI-generated output might be considered a copyrighted work. This is to be decided on a case-by-case basis. The process behind the creation of the specific AI-generated image has to be looked at in order to determine whether it can be considered to be creative enough to be termed a copyrighted work. Key criteria are the amount of detailed prompts, the refining of instructions over repeated generation attempts, and creative selections or changes to outputs."

Inventors back effort to tackle intellectual property thefts; The Center Square, November 19, 2025

Chris Woodward, The Center Square; Inventors back effort to tackle intellectual property thefts

"Today, Metz, who describes herself as a victim of intellectual property theft, supports new federal legislation that would protect inventors like her.

“It’s very overwhelming when you’re the inventor, the creator, and you’re trying to build a business, and then you find out all these people are stealing your property,” Metz said.

The patenting process took about four years and $40,000. Metz also poured $350,000 into molds, employees and a facility to make her product.

Metz said that from 2015 to 2018, when she saw over 150 companies stealing her invention, she got an attorney and began to fight. It was a success. Metz was able to stop every one of those infringers through licensing deals. However, Metz later found herself in an administrative court that was set up by Congress in 2012 through an intellectual property law, the Leahy-Smith America Invents Act. The administrative court or Patent Trial and Appeal Board invalidated both her patents.

“I lost everything,” said Metz. “I lost all my licensing deals. I had about 40 employees at the time. I lost them. All because of a bad law.”"

Major AI copyright lawsuit settlement involves University of Georgia Press authors; The Red & Black, November 21, 2025

Sophia Hou, The Red & Black; Major AI copyright lawsuit settlement involves University of Georgia Press authors

"Under the terms of the settlement, Anthropic has agreed to pay at least $1.5 billion, which will be divided among class members whose claims are submitted and approved. This payout amounts to up to $3000 per work. Class members include all legal and beneficial copyright owners of the books included in the Anthropic copyright settlement website’s searchable database. The settlement administrator is currently notifying authors and publishers who may be the legal or beneficial copyright owners of these books.

Among the books listed in the settlement database were hundreds of books published by UGA Press...

Following initial court approval, the settlement will undergo a fairness hearing and any potential appeals before a final decision is made. The deadline to submit a claim form is March 23, 2026. Copyright owners who want to file individual lawsuits against Anthropic have the choice to opt out of the settlement by Jan. 7, 2026.

As one of the first major class action lawsuits involving AI and copyright in the U.S., this settlement has the potential to shape future legal debates over AI and intellectual property."

Thursday, November 20, 2025

Holding the past accountable by making it visible: Harvard Law School Library’s Paul Deschner discusses the decades-long effort to make the full archive of Nuremberg Trials records available online; Harvard Law Today, November 20, 2025

Colleen Walsh, Harvard Law Today; Holding the past accountable by making it visible

Harvard Law School Library’s Paul Deschner discusses the decades-long effort to make the full archive of Nuremberg Trials records available online

"“The wrongs which we seek to condemn and punish have been so calculated, so malignant, and so devastating that civilization cannot tolerate their being ignored.” So said United States Supreme Court Justice and U.S. Chief of Counsel to the International Military Tribunal, Robert H. Jackson, during his opening statement for the prosecution at the first of 13 Nuremberg Trials, which began 80 years ago, on Nov. 20, 1945.

For decades, the Harvard Law School Library has been working to make the nearly complete set of Nuremberg Trials records publicly available online. It launched the first version of Harvard’s Nuremberg Trials Project website in 2003, but until recently only roughly 20 percent of the Law School’s trove of Nuremberg materials had been accessible to online visitors. Today, the full collection of 140,000 documents comprising more than 700,000 pages is live and searchable by anyone around the globe.

Harvard Law School Library’s Paul Deschner, who has helped guide the project almost since its inception, spoke with Harvard Law Today about the scope of the archive and what it took to bring the entire collection online."

These Books Were Judged by Their A.I. Covers, and Disqualified; The New York Times, November 19, 2025

The New York Times; These Books Were Judged by Their A.I. Covers, and Disqualified

"The authors of the books, which were submitted to one of New Zealand’s largest literary competitions, didn’t know that the artwork was created using A.I. They found out last week, however, when they were disqualified because the covers had violated the contest’s new rule about A.I.-generated material.

The studio that designed the covers defended them, saying that A.I. is part of their creative process. And the independent publisher of the works of fiction said that the contest, the 2026 Ockham New Zealand Book Award, had not given publishers enough time to comply with its new A.I. rules.

The publisher, Quentin Wilson, said in an email on Tuesday that the episode was “heartbreaking” for the two authors, who do not use A.I. in their writing, and upsetting for the production and design teams that worked hard on the books. He added that the rapid rise of A.I. has put the publishing industry in “uncharted waters.”...

The episode is one of many “fronts of chaos” as creative industries try to establish fair and sensible rules for A.I.-generated content, said Oliver Bown, a professor at the University of New South Wales in Australia whose research looks at intersections between art, technology and creativity.

The problem, he said, is that changes come so fast that new regulations are inevitably developed and communicated in a rush."

Warner Music Settles Copyright Suit With AI Song Generator Udio; Bloomberg Law, November 19, 2025

Bloomberg Law; Warner Music Settles Copyright Suit With AI Song Generator Udio

"Warner Music Group reached a deal with AI music-generator Udio, putting to bed its copyright lawsuit over the use of songs to train the startup’s AI model."

Wednesday, November 19, 2025

Disney has lost Roger Rabbit: Termination of Transfer is the most pro-artist form of copyright.; Medium, November 18, 2025

Cory Doctorow, Medium; Disney has lost Roger Rabbit

Termination of Transfer is the most pro-artist form of copyright.

"Gary K Wolf is the author of a fantastic 1981 novel called Who Censored Roger Rabbit? which Disney licensed and turned into an equally fantastic 1988 live action/animated hybrid movie called Who Framed Roger Rabbit? But despite the commercial and critical acclaim of the movie, Disney hasn’t made any feature-length sequels.

This is a nightmare scenario for a creator: you make a piece of work that turns out to be incredibly popular, but you’ve licensed it to a kind of absentee landlord who owns the rights but refuses to exercise them. Luckily, the copyright system contains a provision designed to rescue creative workers who fall into this trap: “Termination of Transfer.”

“Termination of Transfer” was introduced via the 1976 Copyright Act. It allows creators to unilaterally cancel the copyright licenses they have signed over to others, by waiting 35 years and then filing some paperwork with the US Copyright Office."

Happy holidays: AI-enabled toys teach kids how to play with fire, sharp objects; The Register, November 13, 2025

Brandon Vigliarolo, The Register; Happy holidays: AI-enabled toys teach kids how to play with fire, sharp objects

"Picture the scene: It's Christmas morning and your child is happily chatting with the AI-enabled teddy bear you got them when you hear it telling them about sexual kinks, where to find the knives, and how to light matches. This is not a hypothetical scenario. 

As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) tested four AI toys and found that, while some are worse than others at veering off their limited guardrails, none of them are particularly safe for impressionable young minds. 

PIRG was only able to successfully test three of the four LLM-infused toys it sought to inspect, and the worst offender in terms of sharing inappropriate information with kids was scarf-wearing teddy bear Kumma from Chinese company FoloToy. 

"Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches and plastic bags," PIRG wrote in its report, noting that those tidbits of harmful information were all provided using OpenAI's GPT-4o, which is the default model the bear uses. Parents who visited Kumma's web portal and changed the toy's bot to the Mistral Large Model would get an even more detailed description of how to use matches."

Tuesday, November 18, 2025

OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert; Bloomberg Law, November 18, 2025

Aruni Soni, Bloomberg Law; OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert

"OpenAI Inc. is banking on a privacy argument to block a court’s probe into millions of ChatGPT user conversations. 

That hasn’t worked so far as a winning legal strategy that can be used by other chatbot makers anticipating similar discovery demands in exploding chatbot-related litigation.

Instead, it threatens to turn attention to just how much information chatbots like ChatGPT are collecting and retaining about their users."

Eminem sues Australian beach brand 'Swim Shady'; BBC, November 18, 2025

Harry Sekulich, BBC; Eminem sues Australian beach brand 'Swim Shady'

"US rapper Eminem has taken legal action against an Australian beachwear company called Swim Shady, saying its name is too similar to his trademarked rap pseudonym Slim Shady. 

In September, Eminem – whose real name is Marshall B Mathers III – filed a petition to the US Patent and Trademark Office, calling for it to cancel a trademark that was granted to the company. US law requires the company to respond to the petition by next week."

‘Buy Nothing’ Was Their Everything. Then Came the Trademark Troubles.; The New York Times via The Seattle Times, November 16, 2025

The New York Times via The Seattle Times; ‘Buy Nothing’ Was Their Everything. Then Came the Trademark Troubles.

"“The decision to incorporate the Buy Nothing Project as a public benefit corporation came after years of rapid, grassroots growth,” Liesl Clark, the CEO of Buy Nothing, wrote in an email. “It became clear that to sustain this work, protect the integrity of the mission and continue to grow responsibly, we needed a formal structure.” 

Plenty of members of the local groups feel disgruntled about these top-down rules. But at the moment, many are particularly galled at the timing of the recent page takedowns.

“It’s anti the ethos of the whole idea of Buy Nothing to go around and start enforcing a trademark while we’re in the middle of a SNAP crisis,” said Aidan Grimshaw, one of the administrators of a San Francisco group that used the Buy Nothing name, referring to the federal government’s largest food-assistance program. “It feels like a sign of the times.” 

On Buy Nothing’s blog, the organization said that reviews of unregistered pages happen intermittently, unrelated to the news. “We understand that some removals have coincided with the rollback to federal SNAP benefits,” the statement read. “Timing of group removals is outside of our control, and no unregistered groups have been reported since the rollback began.” 

Clark, a filmmaker, and Rebecca Rockefeller, who had bounced between gigs and at points lived on food stamps, started the first Buy Nothing group in 2013. They were partly inspired by the sort of gifting economies that Clark saw while filming a documentary in the Himalayas. What began as a small Facebook group in Bainbridge Island, Washington, took off quickly, leading eventually to thousands more groups with millions of members. Participation in the groups ballooned during the pandemic, when people were confined to their homes and hungry for connection. In 2021, the two founders incorporated it as a public benefit corporation. 

Some members of the local groups complained that the creation of a new structure and new rules violated the loose and free spirit of the community. The administrators who run the San Francisco page were incensed when they received an email from Facebook on Oct. 30 informing them of their trademark infringement."

Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too; The Conversation, November 17, 2025

Professor of Sociology, College of the Holy Cross, The Conversation; Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too

"Debates about generative artificial intelligence on college campuses have largely centered on student cheating. But focusing on cheating overlooks a larger set of ethical concerns that higher education institutions face, from the use of copyrighted material in large language models to student privacy.

As a sociologist who teaches about AI and studies the impact of this technology on work, I am well acquainted with research on the rise of AI and its social consequences. And when one looks at ethical questions from multiple perspectives – those of students, higher education institutions and technology companies – it is clear that the burden of responsible AI use should not fall entirely on students’ shoulders.

I argue that responsibility, more generally, begins with the companies behind this technology and needs to be shouldered by higher education institutions themselves."

The Unraveling of the Justice Department: Sixty attorneys describe a year of chaos and suspicion.; The New York Times Magazine, November 16, 2025

Emily Bazelon and Rachel Poser, Photographs by Stephen Voss, The New York Times Magazine; The Unraveling of the Justice Department: Sixty attorneys describe a year of chaos and suspicion.

"President Trump’s second term has brought a period of turmoil and controversy unlike any in the history of the Justice Department. Trump and his appointees have blasted through the walls designed to protect the nation’s most powerful law enforcement agency from political influence; they have directed the course of criminal investigations, openly flouted ethics rules and caused a breakdown of institutional culture. To date, more than 200 career attorneys have been fired, and thousands more have resigned. (The Justice Department says many of them have been replaced.)

What was it like inside this institution as Trump’s officials took control? It’s not an easy question to answer. Justice Department norms dictate that career attorneys, who are generally nonpartisan public servants, rarely speak to the press. And the Trump administration’s attempts to crack down on leaks have made all federal employees fearful of sharing information.

But the exodus of lawyers has created an opportunity to understand what’s happening within the agency. We interviewed more than 60 attorneys who recently resigned or were fired from the Justice Department. Much of what they told us is reported here for the first time..."

"“They didn’t want the ethics office calling them up and telling them what to do.” Joseph Tirrell, former director of the Departmental Ethics Office"

Authors dumped from New Zealand’s top book prize after AI used in cover designs; The Guardian, November 17, 2025

The Guardian; Authors dumped from New Zealand’s top book prize after AI used in cover designs

"The books of two award-winning New Zealand authors have been disqualified from consideration for the country’s top literature prize because artificial intelligence was used in the creation of their cover designs.

Stephanie Johnson’s collection of short stories Obligate Carnivore and Elizabeth Smither’s collection of novellas Angel Train were submitted to the 2026 Ockham book awards’ NZ$65,000 fiction prize in October, but were ruled out of the competition the following month in light of new guidelines around AI use.

The publisher of both books, Quentin Wilson, said the awards committee amended the guidelines in August, by which time the covers of every book submitted for the awards would have already been designed."

Monday, November 17, 2025

Law firm Morgan & Morgan drops Disney lawsuit over Mickey Mouse ad; Reuters, November 12, 2025

Reuters; Law firm Morgan & Morgan drops Disney lawsuit over Mickey Mouse ad

"Personal injury law firm Morgan & Morgan on Wednesday voluntarily dismissed a lawsuit against Disney that sought to proactively defend its use of the early Mickey Mouse film "Steamboat Willie" in an advertisement.

Morgan & Morgan asked a Florida federal court to dismiss its case without prejudice, which means it can be refiled. Spokespeople for the firm did not immediately respond to a request for comment or for more information, including whether the parties settled."

Inside the old church where one trillion webpages are being saved; CNN, November 16, 2025

CNN; Inside the old church where one trillion webpages are being saved

"The Wayback Machine, a tool used by millions every day, has proven critical for academics and journalists searching for historical information on what corporations, people and governments have published online in the past, long after their websites have been updated or changed.

For many, the Wayback Machine is like a living history of the internet, and it just logged its trillionth page last month.

Archiving the web is more important and more challenging than ever before. The White House in January ordered vast amounts of government webpages to be taken down. Meanwhile, artificial intelligence is blurring the line between what’s real and what’s artificially generated — in some ways replacing the need to visit websites entirely. And more of the internet is now hidden behind paywalls or tucked in conversations with AI chatbots.

It’s the Internet Archive’s job to figure out how to preserve it all."

Paul McCartney joins music industry protest against AI with silent track; The Guardian, November 17, 2025

The Guardian; Paul McCartney joins music industry protest against AI with silent track

"At two minutes 45 seconds it’s about the same length as With a Little Help From My Friends. But Paul McCartney’s first new recording in five years lacks the sing-along tune and jaunty guitar chops because there’s barely anything there.

The former Beatle, arguably Britain’s greatest living songwriter, is releasing a track of an almost completely silent recording studio as part of a music industry protest against copyright theft by artificial intelligence companies.

In place of catchy melodies and evocative lyrics there is only quiet hiss and the odd clatter. It suggests that if AI companies unfairly exploit musicians’ intellectual property to train their generative AI models, the creative ecosystem will be wrecked and original music silenced.

McCartney, 83 and currently touring North America, has added the track to the B-side of an LP called Is This What We Want?, which is filled with other silent recordings and will be pressed on vinyl and released later this month."

Barnes, Calzada Seek 2027–2028 ALA Presidency; American Libraries, November 6, 2025

American Libraries; Barnes, Calzada Seek 2027–2028 ALA Presidency

"The American Library Association (ALA) announced on November 6 the candidates running for ALA president for the 2027–2028 term. They are Tamika Barnes, associate dean of Perimeter College Library Services at Georgia State University in Atlanta, and Becky Calzada, district library coordinator at Leander (Tex.) Independent School District."

Getty Images v. Stability AI - Intellectual Property Rights in the Age of Generative AI; The National Law Journal, November 14, 2025

Nathan Smith and Anita Hodea (Katten), The National Law Journal; Getty Images v. Stability AI - Intellectual Property Rights in the Age of Generative AI

"Key Takeaways

  • Getty succeeded only in part, and narrowly, on its trade mark infringement claims. Findings were confined to specific examples of outputs from early versions of the Model involving "iStock" and "Getty Images" watermarks.
  • Getty's secondary copyright infringement claim failed. The court held that the Model’s weights were not an "infringing copy" of Getty's works because the Model did not at any time contain or store a copy of the underlying Getty images.
  • The judgment leaves critical questions unanswered relating to the relationship between IP rights and generative AI, particularly whether the use of copyright protected works to train AI models constitutes copyright infringement. It was hoped that these issues would be addressed by the court, but this element of the claim was withdrawn during trial.
  • Looking ahead, the UK government's ongoing work with expert groups from both the creative and technology sectors will be closely watched, as it seeks to balance the protection of human creativity with technological innovation."

Sunday, November 16, 2025

In Memoriam: The Sudden Demise of the AMA Journal of Ethics — A great loss for physicians, the profession, and the public; MedPage Today, November 14, 2025

Matthew Wynia, MD, MPH, and Kayhan Parsi, JD, PhD, MedPage Today; In Memoriam: The Sudden Demise of the AMA Journal of Ethics — A great loss for physicians, the profession, and the public

"Bioethics is a small field, but we punch above our weight when it comes to writing. Professional journal articles, reports, and policies are arguably our primary written products, since the main job in bioethics is to help clinicians and others navigate ethical challenges in their work. But we also write for the public, in forums like blogs and editorials, since many of the issues we write about have broader implications. Consequently, learning to write for publication is a key skill for bioethicists, and professional journals are critical for the field. One particular journal -- the AMA Journal of Ethics -- has been a stalwart in giving a voice to newcomers to the field...

Why Did the AMA Kill its Journal of Ethics?

The AMA is the nation's largest and most influential medical professional organization, and its Journal of Ethics held the mission of, "illuminating the art of medicine" by being an open access journal, freely available to all, with no advertising, focusing each month on an important ethical issue in healthcare, and, most uniquely perhaps, each issue was edited by health professional trainees and their mentors. Only the AMA, with its mission, resources, and reach, could have produced this journal.

One possible reason for its elimination might be financial. But if financial returns were to be a metric for success, then the AMA JoE had a bad business model from the start: no fees, no subscriptions, no advertising. As Kao argued, a guiding premise for the journal was that "ethics inquiry is a public good" -- hence no fees or subscriptions and no ads (avoiding conflicts of interest is critical in ethics inquiry).

For the AMA, the business case for AMA JoE could never have been about profit; rather, it was about demonstrating the AMA's integrity, altruism, and service to physicians from very early in their careers. The journal aimed to build goodwill, bolster the AMA's reputation, improve ethical deliberation within the profession and, most importantly, entice students and trainees to engage seriously with the organization. By these metrics it has succeeded. Over its more than 25 years in existence, the journal drew innumerable medical students, residents, and fellows into the AMA. It also provided a crucial training ground for young people in medicine who wanted to learn about bioethics and about writing and editing, and it helped build the credibility and presence of the AMA and its ethics group nationally and internationally.

So, if it wasn't about profit, perhaps it was the political environment. The journal encouraged medical trainees to explore some of the most contentious challenges facing medicine and society, so it inherently provided opportunities for controversy. Issues this year have addressed themes of private equity in medicine, regret and surgical professionalism, and evidence-based design in healthcare. Meanwhile, issues in prior years have addressed some currently inflammatory topics, like ethical issues related to transgender surgical care and segregation in healthcare. Remarkably, the journal still very rarely caused public relations problems for the AMA, perhaps because its editorial staff were highly qualified professionals, but also because its approach to controversy was civil, inquisitive, and exploratory.

As Kao wrote in a farewell essay this month: "For over a quarter of a century, the AMA Journal of Ethics has striven to publish insightful commentaries, engaging podcasts, and provocative artwork that help medical students, physicians, and all health care professionals reflect on and make sound ethical decisions in service to patients and society." In fact, the journal often demonstrated exactly this spirit of respectful discussion about challenging ethical issues that we need to rekindle today, making its loss even more tragic and difficult to explain.

AMA JoE: A Value-Added Offering

In a recent opinion piece in MedPage Today, "Medical Societies Are Facing an Existential Crisis," the authors exhorted medical societies, facing declining memberships and engagement among young physicians, to reimagine their role by offering "free basic memberships supplemented by value-added services [that] could attract early-career physicians who might otherwise remain disengaged." AMA JoE was exactly this type of value-added offering that not only served students and trainees, but also educators across health professions. Anecdotally, many health profession educators we know routinely use pieces from AMA JoE in their teaching and now lament its demise.

The AMA has reportedly promised to keep the historical content of the journal accessible on the AMA JoE website. This is no consolation for the students, residents, and fellows who were working on future issues, but it means the legacy of the journal will live on. Someday, we'd like to believe it might even be revived.

For now, we mourn the loss of AMA JoE for the field of bioethics. Even more, we mourn what the AMA's sudden elimination of its ethics journal might mean for physicians, the profession, and the public."

Saturday, November 15, 2025

Pope Leo XIV’s important warning on ethics of AI and new technology; The Fresno Bee, November 15, 2025

Andrew Fiala, The Fresno Bee; Pope Leo XIV’s important warning on ethics of AI and new technology

"Recently, Pope Leo XIV addressed a conference on artificial intelligence in Rome, where he emphasized the need for deeper consideration of the “ethical and spiritual weight” of new technologies...

This begins with the insight that human beings are tool-using animals. Tools extend and amplify our operational power, and they can also either enhance or undermine who we are and what we care about. 

Whether we are enhancing or undermining our humanity ought to be the focus of moral reflection on technology.

This is a crucial question in the AI-era. The AI-revolution should lead us to ask fundamental questions about the ethical and spiritual side of technological development. AI is already changing how we think about intellectual work, such as teaching and learning. Human beings are already interacting with artificial systems that provide medical, legal, psychological and even spiritual advice. Are we prepared for all of this morally, culturally and spiritually?...

At the dawn of the age of artificial intelligence, we need a corresponding new dawn of critical moral judgment. Now is the time for philosophers, theologians and ordinary citizens to think deeply about the philosophy of technology and the values expressed or embodied in our tools. 

It will be exciting to see what the wizards of Silicon Valley will come up with next. But wizardry without wisdom is dangerous."

We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.; The Washington Post, November 12, 2025

The Washington Post; We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.

"OpenAI has largely promoted ChatGPT as a productivity tool, and in many conversations users asked for help with practical tasks such as retrieving information. But in more than 1 in 10 of the chats The Post analyzed, people engaged the chatbot in abstract discussions, musing on topics like their ideas for breakthrough medical treatments or personal beliefs about the nature of reality.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work. (The Post has a content partnership with OpenAI.)...

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”"

Friday, November 14, 2025

Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession; Cleveland.com, November 12, 2025

Cleveland.com; Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession

"A Cleveland defense attorney is under scrutiny in two counties after submitting court filings containing fabrications generated by artificial intelligence — a case that’s prompting broader questions about how lawyers are ethically navigating the use of AI tools in legal practice.

William Norman admitted that a paralegal in his office used ChatGPT to draft a motion to reopen a murder conviction appeal. The document included quotes that did not exist in the trial transcript and misrepresented statements made by the prosecutor."

AMA ethics journal shutters after 26 years; Retraction Watch, November 13, 2025

Retraction Watch; AMA ethics journal shutters after 26 years 

"The American Medical Association will cease publication of its ethics journal at the end of this year. 

The AMA Journal of Ethics, an open access, peer-reviewed journal, was founded in 1999 under the name Virtual Mentor.

“The loss of the AMA JoE will be most acutely felt by medical students and trainees, since it had a unique production model that included them in the process,” said Matthew Wynia, a physician and bioethicist at the University of Colorado whose work has been featured in the journal and who previously led the AMA Institute for Ethics.

The journal publishes monthly issues on a specific theme, such as private equity in health care, antimicrobial resistance, palliative surgery and more. The journal also covered ethics in publishing and research, including a 2015 article titled “How Publish or Perish Promotes Inaccuracy in Science—and Journalism” written by Retraction Watch’s cofounder Ivan Oransky...

The journal’s website will remain online with all content freely available, “in keeping with our guiding premise that ethics inquiry is a public good,” Audiey C. Kao, editor-in-chief of the AMA Journal of Ethics and vice president of the AMA’s Ethics Group for more than two decades, wrote in a statement on the journal’s website. “With humility, I am hopeful and confident that this archived journal content will stay evergreen for years to come.”

The AMA did not provide a reason for the decision to shutter the journal." 

‘This Is the War Against Human Nature’: Paul Kingsnorth argues technology is killing us - physically and spiritually.; The New York Times, November 14, 2025

The New York Times; ‘This Is the War Against Human Nature’: Paul Kingsnorth argues technology is killing us - physically and spiritually.

"A lot of people, myself included, are worried about where technology is taking the human race, and especially how we can stay human in an age of artificial intelligence.

But my guest this week thinks we’re not worried enough. That some kind of apocalypse is all but inevitable — if it isn’t already upon us. That what’s needed now are strategies of resistance, endurance and escape.

And he practices what he preaches, having retreated to the west of Ireland with his family — the better to keep them out of the clutches of what he calls the machine.

But he’s come back to us, for a time, bearing a prophetic message.

Paul Kingsnorth is a novelist and a critic, an environmental activist and a convert to Eastern Orthodoxy. His new book is “Against the Machine: On the Unmaking of Humanity.”"

Inside Colorado's "bullish with guardrails" AI approach; Axios, November 13, 2025

John Frank, Ashley Gold, Axios; Inside Colorado's "bullish with guardrails" AI approach

"Colorado's approach to integrating artificial intelligence into government functions is "bullish with guardrails."

Why it matters: Colorado offers a model for balancing AI innovation with safety, barring the technology from "anything that looks or smells or could possibly be thought of as a consequential decision," David Edinger, the state's chief information officer, told Axios in an interview.

Driving the news: The approach is a directive from Gov. Jared Polis, a former technology entrepreneur who encouraged the state's technology office to embrace AI in government.

The state's Office of Information and Technology created a framework for AI use with the NIST AI Risk Management Framework, considering the needs of different state agencies.

  • The technology is making office work and mundane tasks easier and state employees with disabilities said AI made them more productive."