Friday, November 14, 2025

AMA ethics journal shutters after 26 years; Retraction Watch, November 13, 2025

Retraction Watch; AMA ethics journal shutters after 26 years 

"The American Medical Association will cease publication of its ethics journal at the end of this year. 

The AMA Journal of Ethics, an open access, peer-reviewed journal, was founded in 1999 under the name Virtual Mentor.

“The loss of the AMA JoE will be most acutely felt by medical students and trainees, since it had a unique production model that included them in the process,” said Matthew Wynia, a physician and bioethicist at the University of Colorado whose work has been featured in the journal and who previously led the AMA Institute for Ethics.

The journal publishes monthly issues on a specific theme, such as private equity in health care, antimicrobial resistance, palliative surgery and more. The journal also covered ethics in publishing and research, including a 2015 article titled “How Publish or Perish Promotes Inaccuracy in Science—and Journalism” written by Retraction Watch’s cofounder Ivan Oransky...

The journal’s website will remain online with all content freely available, “in keeping with our guiding premise that ethics inquiry is a public good,” Audiey C. Kao, editor-in-chief of the AMA Journal of Ethics and vice president of the AMA’s Ethics Group for more than two decades, wrote in a statement on the journal’s website. “With humility, I am hopeful and confident that this archived journal content will stay evergreen for years to come.”

The AMA did not provide a reason for the decision to shutter the journal." 

‘This Is the War Against Human Nature’: Paul Kingsnorth argues technology is killing us - physically and spiritually; The New York Times, November 14, 2025

‘This Is the War Against Human Nature’: Paul Kingsnorth argues technology is killing us - physically and spiritually. 

"A lot of people, myself included, are worried about where technology is taking the human race, and especially how we can stay human in an age of artificial intelligence.

But my guest this week thinks we’re not worried enough. That some kind of apocalypse is all but inevitable — if it isn’t already upon us. That what’s needed now are strategies of resistance, endurance and escape.

And he practices what he preaches, having retreated to the west of Ireland with his family — the better to keep them out of the clutches of what he calls the machine.

But he’s come back to us, for a time, bearing a prophetic message.

Paul Kingsnorth is a novelist and a critic, an environmental activist and a convert to Eastern Orthodoxy. His new book is “Against the Machine: On the Unmaking of Humanity.”"

Inside Colorado's "bullish with guardrails" AI approach; Axios, November 13, 2025

John Frank, Ashley Gold, Axios; Inside Colorado's "bullish with guardrails" AI approach

"Colorado's approach to integrating artificial intelligence into government functions is "bullish with guardrails."

Why it matters: Colorado offers a model for balancing AI innovation with safety, barring the technology from "anything that looks or smells or could possibly be thought of as a consequential decision," David Edinger, the state's chief information officer, told Axios in an interview.


Driving the news: The approach is a directive from Gov. Jared Polis, a former technology entrepreneur who encouraged the state's technology office to embrace AI in government.


The state's Office of Information and Technology created a framework for AI use with the NIST AI Risk Management Framework, considering the needs of different state agencies.


  • The technology is making office work and mundane tasks easier, and state employees with disabilities said AI made them more productive."

‘South Park’ addresses AI-generated videos and copyright with Totoro, Trump and Bluey; The Los Angeles Times, November 13, 2025

Kaitlyn Huamani, Los Angeles Times; ‘South Park’ addresses AI-generated videos and copyright with Totoro, Trump and Bluey

"Droopy Dog, Rocky, Bullwinkle, Popeye and even the beloved preschool character Bluey are mentioned or make appearances in the episode. Representatives for Studio Ghibli also appear, offering a voice of reason in the madness, saying, “You cannot just do whatever you want with someone else’s IP.”"

Who Pays When A.I. Is Wrong?; The New York Times, November 12, 2025

The New York Times; Who Pays When A.I. Is Wrong?

"Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.

With cancellations piling up and their attempts to use Google’s tools to correct the issues proving fruitless, Wolf River executives decided they had no choice but to sue the tech giant for defamation.

“We put a lot of time and energy into building up a good name,” said Justin Nielsen, who founded Wolf River with three of his best friends in 2014 and helped it grow into the state’s largest solar contractor. “When customers see a red flag like that, it’s damn near impossible to win them back.”

Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images. They argue that the cutting-edge technology not only created and published false, damaging information about individuals or groups but, in many cases, continued putting it out even after the companies that built and profit from the A.I. models were made aware of the problem.

Unlike other libel or slander suits, these cases seek to define content that was not created by human beings as defamatory — a novel concept that has captivated some legal experts."

Meet chatbot Jesus: Churches tap AI to save souls — and time; Axios, November 12, 2025

Russell Contreras, Isaac Avilucea, Axios; Meet chatbot Jesus: Churches tap AI to save souls — and time

 "A new digital awakening is unfolding in churches, where pastors and prayer apps are turning to artificial intelligence to reach worshippers, personalize sermons, and power chatbots that resemble God. 

Why it matters: AI is helping some churches stay relevant in the face of shrinking staff, empty pews and growing online audiences. But the practice raises new questions about who, or what, is guiding the flock.


  • New AI-powered apps allow you to "text with Jesus" or "talk to the Bible," giving the impression you are communicating with a deity or angel. 

  • Other apps can create personalized prayers, let you confess your sins or offer religious advice on life's decisions.

  • "What could go wrong?" Robert P. Jones, CEO of the nonpartisan Public Religion Research Institute, sarcastically asks. 

State of play: The U.S. could see an unprecedented 15,000 churches shut their doors this year as a record number of Americans (29%) now are identifying as religiously unaffiliated.


  • Megachurches are consolidating the remaining faithful, but even the most charismatic pastors struggle to offer private counseling with such large congregations.

Zoom in: In recent months, churches have been deploying chatbots to answer frequently asked questions such as service times and event details, and even to share scripture.


  • EpiscoBot, a chatbot developed by the TryTank Research Institute for the Episcopal Church, responds to spiritual or faith-related queries, drawing on church resources.

  • Other AI apps analyze congregational data (attendance and engagement) to tailor outreach and communications.

  • And more pastors are admitting that they use AI to assist in creating sermons or reduce writing time."

Thursday, November 13, 2025

AI Regulation is Not Enough. We Need AI Morals; Time, November 11, 2025

Nicole Brachetti Peretti, Time; AI Regulation is Not Enough. We Need AI Morals

"Pope Leo XIV recently called for “builders of AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.” 

Some tech leaders, including Andreessen Horowitz cofounder Marc Andreessen, have mocked such calls. But to do so is a mistake. We don’t just need AI regulation—we need AI morals."

OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA; The Bookseller, November 12, 2025

Matilda Battersby, The Bookseller; OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA

"A judge’s ruling that legal action by authors against OpenAI for copyright infringement can go ahead reveals “the ease with which generative AI can devastate the market”, according to the Publishers Association (PA).

Last week, a federal judge in the US refused OpenAI’s attempts to dismiss claims by authors that text summaries of published works by ChatGPT (which is owned by OpenAI) infringe their copyrights.

The lawsuit, which is being heard in New York, brings together cases from a number of authors, as well as the Authors Guild, filed in various courts.

In his ruling, which upheld the authors’ right to attempt to sue OpenAI, District Judge Sidney Stein compared George RR Martin’s Game of Thrones to summaries of the novel created by ChatGPT.

Judge Stein said: “[A] discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work because the summary conveys the overall tone and feel of the original work by parroting the plot, characters and themes of the original.”

The class action consolidates 12 complaints being brought against OpenAI and Microsoft. It argues copyrighted books were reproduced to train OpenAI’s artificial intelligence large language models (LLM) and, crucially, that LLMs, including ChatGPT, can infringe copyright via their output, ie the text produced when asked a question.

This landmark legal case is the first to examine whether the output of an AI chatbot infringes copyright, rather than looking at whether the training of the model was an infringement."

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Rock and Roll Hall of Fame Wins Van Halen Photo Copyright Claim; Bloomberg Law, November 11, 2025

Jennifer Kay, Bloomberg Law; Rock and Roll Hall of Fame Wins Van Halen Photo Copyright Claim

"The Rock and Roll Hall of Fame and Museum’s exhibition of a photographer’s images of Eddie Van Halen constitutes fair use and so doesn’t violate copyright laws, a federal judge said."

AI Has Sent Copyright Laws Into Chaos. What You Need to Know About Your Rights Online; CNET, November 11, 2025

Katelyn Chedraoui, CNET; AI Has Sent Copyright Laws Into Chaos. What You Need to Know About Your Rights Online

"You might not think about copyright very often, but we are all copyright owners and authors. In the age of generative AI, copyright has quickly become one of the most important issues in the development and outputs of chatbotsimage and video generators...

What does all of this mean for the future?

Copyright owners are in a bit of a holding pattern for now. But beyond the legal and ethical implications, copyright in the age of AI raises important questions about the value of creative work, the cost of innovation and the ways in which we need or ought to have government intervention and protections."

‘This is fascist America’: Anish Kapoor may sue after border agents pose by his sculpture; The Guardian, November 12, 2025

The Guardian; ‘This is fascist America’: Anish Kapoor may sue after border agents pose by his sculpture

"The artist Anish Kapoor is considering taking legal action after border patrol agents posed for a photo in front of his Cloud Gate sculpture in Chicago, saying the scene represented “fascist America”...

Kapoor took legal action against the National Rifle Association (NRA) after they used an image of Cloud Gate, which was installed in 2006 and is known locally as “the Bean”, in an advert.

He settled out of court with the NRA in 2018. “It’s a bit more complicated with this,” Kapoor said of the more recent incident, “because they’re a full, if you like, national army unit.”"

OpenAI used song lyrics in violation of copyright laws, German court says; Reuters, November 11, 2025

Reuters; OpenAI used song lyrics in violation of copyright laws, German court says

"OpenAI's chatbot ChatGPT violated German copyright laws by reproducing lyrics from songs by best-selling musician Herbert Groenemeyer and others, a court ruled on Tuesday, in a closely watched case against the U.S. firm over its use of lyrics to train its language models.

The regional court in Munich found that the company trained its AI on protected content from nine German songs, including Groenemeyer's hits "Maenner" and "Bochum"."

You’re a Computer Science Major. Don’t Panic.; The New York Times, November 12, 2025

Mary Shaw, The New York Times; You’re a Computer Science Major. Don’t Panic.

"The future of computer science education is to teach students how to master the indispensable skill of supervision.

Why? Because the speed and efficiency of using A.I. to write code is balanced by the reality that it often gets things wrong. These tools are designed to produce results that look convincing, but may still contain errors. A recent survey showed that over half of professional developers use A.I. tools daily, but only about one-third trust their accuracy. When asked what their greatest frustration is about using A.I. tools, two-thirds of respondents answered, “A.I. solutions that are almost right but not quite.”

There is still a need for humans to play a role in coding — a supervisory one, where programmers oversee the use of A.I. tools, determine if A.I.-generated code does what it is supposed to do and make essential repairs to defective code."
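
To make concrete the kind of "almost right but not quite" output the essay describes, here is a minimal, hypothetical sketch (the function and scenario below are invented for illustration, not taken from the article): AI-generated code can read plausibly yet miss an edge case, and the supervising programmer's role is to test for the gap and make the repair.

    # Hypothetical "AI-generated" helper: looks correct, but fails on an edge case.
    def average_rating(ratings):
        """Return the mean of a list of ratings."""
        return sum(ratings) / len(ratings)  # raises ZeroDivisionError for an empty list

    # The supervising programmer spots the defect (e.g., via a test with an empty list)
    # and repairs the code so it does what it is supposed to do.
    def average_rating_fixed(ratings):
        """Return the mean of a list of ratings, or 0.0 if there are none."""
        if not ratings:
            return 0.0
        return sum(ratings) / len(ratings)

    if __name__ == "__main__":
        print(average_rating_fixed([4, 5, 3]))  # 4.0
        print(average_rating_fixed([]))         # 0.0 instead of a crash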

Federal Cuts, Immigration Raids and a Slowing Economy Hit Rural Libraries; The New York Times, November 12, 2025

The New York Times; Federal Cuts, Immigration Raids and a Slowing Economy Hit Rural Libraries

"“A library is in a lot of ways a kind of civic symbol, a demonstration of a community’s commitment to itself. So what does it mean if that goes away?”"

Tuesday, November 11, 2025

AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’; San Francisco Chronicle, November 10, 2025

Aidin Vaziri, San Francisco Chronicle; AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’

"A country hit made by artificial intelligence has climbed to the top of a Billboard chart — a first for the genre.

The song, “Walk My Walk,” by an artist known as Breaking Rust, is now No. 1 on Billboard’s Country Digital Song Sales chart. But the brooding, gravel-voiced cowboy behind the hit doesn’t exist. At least, not in the traditional sense.

He’s an AI creation with millions of streams, tens of thousands of followers and no verifiable human footprint." 

Pitt School of Medicine Student Innovator is Empowering People to Take Charge of Their Healthcare; University of Pittsburgh Office of Innovation & Entrepreneurship, October 21, 2025

Karen Woolstrum, University of Pittsburgh Office of Innovation & Entrepreneurship; Pitt School of Medicine Student Innovator is Empowering People to Take Charge of Their Healthcare

"Inspiration Strikes in the ER

While her research focuses on cystic fibrosis, Li’s entrepreneurial journey began during a rotation in the emergency room. It dawned on her that many patients in the ER could be empowered to take control of their own health monitoring and potentially avoid traumatic and costly ER visits. She quickly devised an idea for an electronic stethoscope that people can use to measure vital signs of the heart and lungs from home.

In collaboration with a friend, Akshaya Anand, a machine-learning graduate student from the University of Maryland, she founded Korion Health and entered the 2022 Randall Family Big Idea Competition hosted by the Big Idea Center, Pitt’s hub for student innovation (part of the OIE).

They were awarded a modest $2,000 4th-place prize, but the value they received from the month-long competition and mentorship extended far beyond that. The experience of crafting her pitch and having her idea validated in the eyes of experienced entrepreneurs gave her the confidence to continue pursuing the device’s commercial potential.

Next up was a pitch competition hosted by the Product Development Managers Association (PDMA) in which she won first place in the graduate-student category, with the award including consulting hours from local companies such as Bally Design and Lexicon Design that she said “helped me take my half-baked idea and turn it into a prototype to show to investors.”

“This was a high yield for the effort. If it’s something they can hold in their hands it really helps communicate the value proposition,” she added.

From there, things began to snowball. On the same day that she won the UpPrize Social Innovation Competition sponsored by Bank of New York in the racial equity category ($75k), she won the first place prize from the American Heart Association’s EmPOWERED to Serve Business Accelerator ($50k). The resulting publicity attracted the attention of organizers of the Hult Prize Competition, a global student startup competition that receives thousands of applicants each year, who invited her to apply.

“I didn’t know anything about the Hult Prize competition. At first, I thought it was spam,” she admitted.

She had no illusions of advancing to the finals near London, let alone winning the top prize of $1 million: until she did."

Sunday, November 9, 2025

California Prosecutor Says AI Caused Errors in Criminal Case; Sacramento Bee via Government Technology, November 7, 2025

 Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case

"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.

The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.

The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.

“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”"

The AI spending frenzy is so huge that it makes no sense; The Washington Post, November 7, 2025

The Washington Post; The AI spending frenzy is so huge that it makes no sense

" In just the past year, the four richest companies developing AI — Microsoft, Google, Amazon and Meta — have spent roughly $360 billion combined for big-ticket projects, which included building AI data centers and stuffing them with computer chips and equipment, according to my analysis of financial disclosures.

(Amazon founder Jeff Bezos owns The Washington Post.)

That same amount of money could pay for about four years’ worth of the Supplemental Nutrition Assistance Program (SNAP), the federal government program that distributes more than $90 billion in yearly food assistance to 42 million Americans. SNAP benefits are in limbo for now during the government shutdown...

Eight of the world’s top 10 most valuable companies are AI-centric or AI-ish American corporate giants — Nvidia, Apple, Microsoft, Google, Amazon, Broadcom, Meta and Tesla. That’s according to tallies from S&P Global Market Intelligence based on the total price of the companies’ stock held by investors."


A.I. Is on Its Way to Something Even More Remarkable Than Intelligence; The New York Times, November 8, 2025

Barbara Gail Montero, The New York Times; A.I. Is on Its Way to Something Even More Remarkable Than Intelligence

"Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians."

Saturday, November 8, 2025

Stability AI’s legal win over Getty leaves copyright law in limbo; The Verge, November 5, 2025

Robert Hart, The Verge; Stability AI’s legal win over Getty leaves copyright law in limbo

"Stability AI, the creator of popular AI art tool Stable Diffusion, was largely victorious against Getty Images on Tuesday in a British legal battle over the material used to train AI models. The case originally looked set to produce a landmark ruling on AI and copyright in the UK, but it landed with a thud and failed to set any clear precedent for the big question dividing AI companies and creative firms: whether AI models need permission to train on copyrighted works.

The case, first filed in 2023, is the first major AI copyright claim to reach England’s High Court, though the verdict offers little clarity to other AI companies and rightsholders."

Friday, November 7, 2025

To Preserve Records, Homeland Security Now Relies on Officials to Take Screenshots; The New York Times, November 6, 2025

The New York Times; To Preserve Records, Homeland Security Now Relies on Officials to Take Screenshots


[Kip Currier: This new discretionary DHS records policy is counter to sound ethics practices and democracy-centered values.

Preservation of records promotes transparency, the historical record, accountability, access to information, informed citizenries, the right to petition one's government, free and independent presses, and more. The new DHS records policy undermines all of the above.]



[Excerpt]

"The Department of Homeland Security has stopped using software that automatically captured text messages and saved trails of communication between officials, according to sworn court statements filed this week.

Instead, the agency began in April to require officials to manually take screenshots of their messages to comply with federal records laws, citing cybersecurity concerns with the autosave software.

Public records experts say the new record-keeping policy opens ample room for both willful and unwitting noncompliance with federal open records laws in an administration that has already shown a lack of interest in, or willingness to skirt, records laws. That development could be particularly troubling as the department executes President Trump’s aggressive agenda of mass deportations, a campaign that has included numerous accusations of misconduct by law enforcement officials, the experts said.

“If you are an immigration official or an agent and believe that the public might later criticize you, or that your records could help you be held accountable, would you go out of the way to preserve those records that might expose wrongdoing?” said Lauren Harper, who advocates government transparency at the Freedom of the Press Foundation."

The ethics of AI, from policing to healthcare; KPBS, November 3, 2025

Jade Hindmon / KPBS Midday Edition Host, Ashley Rusch / Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining other cities in using AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work.

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Thursday, November 6, 2025

Staying Human in the Age of AI; Duquesne University, Grefenstette Center for Ethics, November 6-7, 2025

Duquesne University, Grefenstette Center for Ethics; 2025 Tech Ethics Symposium: Staying Human in the Age of AI

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"