Friday, November 14, 2025

Meet chatbot Jesus: Churches tap AI to save souls — and time; Axios, November 12, 2025

Russell Contreras, Isaac Avilucea, Axios; Meet chatbot Jesus: Churches tap AI to save souls — and time

 "A new digital awakening is unfolding in churches, where pastors and prayer apps are turning to artificial intelligence to reach worshippers, personalize sermons, and power chatbots that resemble God. 

Why it matters: AI is helping some churches stay relevant in the face of shrinking staff, empty pews and growing online audiences. But the practice raises new questions about who, or what, is guiding the flock.


  • New AI-powered apps allow you to "text with Jesus" or "talk to the Bible," giving the impression you are communicating with a deity or angel. 

  • Other apps can create personalized prayers, let you confess your sins or offer religious advice on life's decisions.

  • "What could go wrong?" Robert P. Jones, CEO of the nonpartisan Public Religion Research Institute, sarcastically asks. 

State of play: The U.S. could see an unprecedented 15,000 churches shut their doors this year as a record number of Americans (29%) now are identifying as religiously unaffiliated.


  • Megachurches are consolidating the remaining faithful, but even the most charismatic pastors struggle to offer private counseling with such large congregations.

Zoom in: In recent months, churches have been deploying chatbots to answer frequently asked questions such as service times and event details, and even to share scripture.


  • EpiscoBot, a chatbot developed by the TryTank Research Institute for the Episcopal Church, responds to spiritual or faith-related queries, drawing on church resources.

  • Other AI apps analyze congregational data (attendance and engagement) to tailor outreach and communications.

  • And more pastors are admitting that they use AI to assist in creating sermons or reduce writing time."

Thursday, November 13, 2025

AI Regulation is Not Enough. We Need AI Morals; Time, November 11, 2025

Nicole Brachetti Peretti, Time; AI Regulation is Not Enough. We Need AI Morals

"Pope Leo XIV recently called for “builders of AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.” 

Some tech leaders, including Andreessen Horowitz cofounder Marc Andreessen have mocked such calls. But to do so is a mistake. We don’t just need AI regulation—we need AI morals." 

OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA; The Bookseller, November 12, 2025

Matilda Battersby, The Bookseller; OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA

"A judge’s ruling that legal action by authors against OpenAI for copyright infringement can go ahead reveals “the ease with which generative AI can devastate the market”, according to the Publishers Association (PA).

Last week, a federal judge in the US refused OpenAI’s attempts to dismiss claims by authors that text summaries of published works by ChatGPT (which is owned by OpenAI) infringe their copyrights.

The lawsuit, which is being heard in New York, brings together cases from a number of authors, as well as the Authors Guild, filed in various courts.

In his ruling, which upheld the authors’ right to attempt to sue OpenAI, District Judge Sidney Stein compared George RR Martin’s Game of Thrones to summaries of the novel created by ChatGPT.

Judge Stein said: “[A] discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work because the summary conveys the overall tone and feel of the original work by parroting the plot, characters and themes of the original.”

The class action consolidates 12 complaints being brought against OpenAI and Microsoft. It argues copyrighted books were reproduced to train OpenAI’s artificial intelligence large language models (LLM) and, crucially, that LLMs, including ChatGPT, can infringe copyright via their output, ie the text produced when asked a question.

This landmark legal case is the first to examine whether the output of an AI chatbot infringes copyright, rather than looking at whether the training of the model was an infringement."

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Rock and Roll Hall of Fame Wins Van Halen Photo Copyright Claim; Bloomberg Law, November 11, 2025

Jennifer Kay, Bloomberg Law; Rock and Roll Hall of Fame Wins Van Halen Photo Copyright Claim

"The Rock and Roll Hall of Fame and Museum’s exhibition of a photographer’s images of Eddie Van Halen constitutes fair use and so doesn’t violate copyright laws, a federal judge said."

AI Has Sent Copyright Laws Into Chaos. What You Need to Know About Your Rights Online; CNET, November 11, 2025

Katelyn Chedraoui, CNET; AI Has Sent Copyright Laws Into Chaos. What You Need to Know About Your Rights Online

"You might not think about copyright very often, but we are all copyright owners and authors. In the age of generative AI, copyright has quickly become one of the most important issues in the development and outputs of chatbots, image and video generators...

What does all of this mean for the future?

Copyright owners are in a bit of a holding pattern for now. But beyond the legal and ethical implications, copyright in the age of AI raises important questions about the value of creative work, the cost of innovation and the ways in which we need or ought to have government intervention and protections."

‘This is fascist America’: Anish Kapoor may sue after border agents pose by his sculpture; The Guardian, November 12, 2025

The Guardian; ‘This is fascist America’: Anish Kapoor may sue after border agents pose by his sculpture

"The artist Anish Kapoor is considering taking legal action after border patrol agents posed for a photo in front of his Cloud Gate sculpture in Chicago, saying the scene represented “fascist America”...

Kapoor took legal action against the National Rifle Association (NRA) after they used an image of Cloud Gate, which was installed in 2006 and is known locally as “the Bean”, in an advert.

He settled out of court with the NRA in 2018. “It’s a bit more complicated with this,” Kapoor said of the more recent incident, “because they’re a full, if you like, national army unit.”"

OpenAI used song lyrics in violation of copyright laws, German court says; Reuters, November 11, 2025

Reuters; OpenAI used song lyrics in violation of copyright laws, German court says

"OpenAI's chatbot ChatGPT violated German copyright laws by reproducing lyrics from songs by best-selling musician Herbert Groenemeyer and others, a court ruled on Tuesday, in a closely watched case against the U.S. firm over its use of lyrics to train its language models.

The regional court in Munich found that the company trained its AI on protected content from nine German songs, including Groenemeyer's hits "Maenner" and "Bochum"."

You’re a Computer Science Major. Don’t Panic.; The New York Times, November 12, 2025

Mary Shaw, The New York Times; You’re a Computer Science Major. Don’t Panic.

"The future of computer science education is to teach students how to master the indispensable skill of supervision.

Why? Because the speed and efficiency of using A.I. to write code is balanced by the reality that it often gets things wrong. These tools are designed to produce results that look convincing, but may still contain errors. A recent survey showed that over half of professional developers use A.I. tools daily, but only about one-third trust their accuracy. When asked what their greatest frustration is about using A.I. tools, two-thirds of respondents answered, “A.I. solutions that are almost right but not quite.”

There is still a need for humans to play a role in coding — a supervisory one, where programmers oversee the use of A.I. tools, determine if A.I.-generated code does what it is supposed to do and make essential repairs to defective code."

Federal Cuts, Immigration Raids and a Slowing Economy Hit Rural Libraries; The New York Times, November 12, 2025

The New York Times; Federal Cuts, Immigration Raids and a Slowing Economy Hit Rural Libraries

"“A library is in a lot of ways a kind of civic symbol, a demonstration of a community’s commitment to itself. So what does it mean if that goes away?”"

Tuesday, November 11, 2025

AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’; San Francisco Chronicle, November 10, 2025

Aidin Vaziri, San Francisco Chronicle; AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’

"A country hit made by artificial intelligence has climbed to the top of a Billboard chart — a first for the genre.

The song, “Walk My Walk,” by an artist known as Breaking Rust, is now No. 1 on Billboard’s Country Digital Song Sales chart. But the brooding, gravel-voiced cowboy behind the hit doesn’t exist. At least, not in the traditional sense.

He’s an AI creation with millions of streams, tens of thousands of followers and no verifiable human footprint." 

Pitt School of Medicine Student Innovator is Empowering People to Take Charge of Their Healthcare; University of Pittsburgh Office of Innovation & Entrepreneurship, October 21, 2025

Karen Woolstrum, University of Pittsburgh Office of Innovation & Entrepreneurship; Pitt School of Medicine Student Innovator is Empowering People to Take Charge of Their Healthcare

"Inspiration Strikes in the ER

While her research focuses on cystic fibrosis, Li’s entrepreneurial journey began during a rotation in the emergency room. It dawned on her that many patients in the ER could be empowered to take control of their own health monitoring and potentially avoid traumatic and costly ER visits. She quickly devised an idea for an electronic stethoscope that people can use to measure vital signs of the heart and lungs from home.

In collaboration with a friend, Akshaya Anand, a machine-learning graduate student from the University of Maryland, she founded Korion Health and entered the 2022 Randall Family Big Idea Competition hosted by the Big Idea Center, Pitt’s hub for student innovation (part of the OIE).

They were awarded a modest $2,000 4th-place prize, but the value they received from the month-long competition and mentorship extended far beyond that. The experience of crafting her pitch and having her idea validated in the eyes of experienced entrepreneurs gave her the confidence to continue pursuing the device’s commercial potential.

Next up was a pitch competition hosted by the Product Development Managers Association (PDMA) in which she won first place in the graduate-student category, with the award including consulting hours from local companies such as Bally Design and Lexicon Design that she said “helped me take my half-baked idea and turn it into a prototype to show to investors.”

“This was a high yield for the effort. If it’s something they can hold in their hands it really helps communicate the value proposition,” she added.

From there, things began to snowball. On the same day that she won the UpPrize Social Innovation Competition sponsored by Bank of New York in the racial equity category ($75k), she won the first place prize from the American Heart Association’s EmPOWERED to Serve Business Accelerator ($50k). The resulting publicity attracted the attention of organizers of the Hult Prize Competition, a global student startup competition that receives thousands of applicants each year, who invited her to apply.

“I didn’t know anything about the Hult Prize competition. At first, I thought it was spam,” she admitted.

She had no illusions of advancing to the finals near London, let alone winning the top prize of $1 million: until she did."

Sunday, November 9, 2025

California Prosecutor Says AI Caused Errors in Criminal Case; Sacramento Bee via Government Technology, November 7, 2025

 Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case

"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.

The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.

The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.

“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”"

The AI spending frenzy is so huge that it makes no sense; The Washington Post, November 7, 2025

The Washington Post; The AI spending frenzy is so huge that it makes no sense

"In just the past year, the four richest companies developing AI — Microsoft, Google, Amazon and Meta — have spent roughly $360 billion combined for big-ticket projects, which included building AI data centers and stuffing them with computer chips and equipment, according to my analysis of financial disclosures.

(Amazon founder Jeff Bezos owns The Washington Post.)

That same amount of money could pay for about four years’ worth of the Supplemental Nutrition Assistance Program (SNAP), the federal government program that distributes more than $90 billion in yearly food assistance to 42 million Americans. SNAP benefits are in limbo for now during the government shutdown...

Eight of the world’s top 10 most valuable companies are AI-centric or AI-ish American corporate giants — Nvidia, Apple, Microsoft, Google, Amazon, Broadcom, Meta and Tesla. That’s according to tallies from S&P Global Market Intelligence based on the total price of the companies’ stock held by investors."


A.I. Is on Its Way to Something Even More Remarkable Than Intelligence; The New York Times, November 8, 2025

Barbara Gail Montero, The New York Times; A.I. Is on Its Way to Something Even More Remarkable Than Intelligence

"Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians."

Saturday, November 8, 2025

Stability AI’s legal win over Getty leaves copyright law in limbo; The Verge, November 5, 2025

Robert Hart, The Verge; Stability AI’s legal win over Getty leaves copyright law in limbo

"Stability AI, the creator of popular AI art tool Stable Diffusion, was largely victorious against Getty Images on Tuesday in a British legal battle over the material used to train AI models. The case originally looked set to produce a landmark ruling on AI and copyright in the UK, but it landed with a thud and failed to set any clear precedent for the big question dividing AI companies and creative firms: whether AI models need permission to train on copyrighted works.

The case, first filed in 2023, is the first major AI copyright claim to reach England’s High Court, though the verdict offers little clarity to other AI companies and rightsholders."

Friday, November 7, 2025

To Preserve Records, Homeland Security Now Relies on Officials to Take Screenshots; The New York Times, November 6, 2025

The New York Times; To Preserve Records, Homeland Security Now Relies on Officials to Take Screenshots


[Kip Currier: This new discretionary DHS records policy is counter to sound ethics practices and democracy-centered values.

Preservation of records promotes transparency, the historical record, accountability, access to information, informed citizenries, the right to petition one's government, free and independent presses, and more. The new DHS records policy undermines all of the above.]



[Excerpt]

"The Department of Homeland Security has stopped using software that automatically captured text messages and saved trails of communication between officials, according to sworn court statements filed this week.

Instead, the agency began in April to require officials to manually take screenshots of their messages to comply with federal records laws, citing cybersecurity concerns with the autosave software.

Public records experts say the new record-keeping policy opens ample room for both willful and unwitting noncompliance with federal open records laws in an administration that has already shown a lack of interest in, or willingness to skirt, records laws. That development could be particularly troubling as the department executes President Trump’s aggressive agenda of mass deportations, a campaign that has included numerous accusations of misconduct by law enforcement officials, the experts said.

“If you are an immigration official or an agent and believe that the public might later criticize you, or that your records could help you be held accountable, would you go out of the way to preserve those records that might expose wrongdoing?” said Lauren Harper, who advocates government transparency at the Freedom of the Press Foundation."

The ethics of AI, from policing to healthcare; KPBS; November 3, 2025

Jade Hindmon / KPBS Midday Edition Host, Ashley Rusch / Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining cities to use AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work. 

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Thursday, November 6, 2025

Staying Human in the Age of AI; Duquesne University, Grefenstette Center for Ethics, November 6-7, 2025

Duquesne University, Grefenstette Center for Ethics; 2025 Tech Ethics Symposium: Staying Human in the Age of AI

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"

Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita, ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI


[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-a-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and that person still elects to purchase that book with that knowledge, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books and are not being informed by the purveyors of AI-generated books, or the platforms that make those books accessible for purchase, that those works are not written by human experts and authors. That is deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one would countenance (except for bad actors) children's toys deceptively containing harmful lead or dog and cat treats made with substances that can cause harm or death. Why should consumers not be concerned in similar fashion about books that purport to be created by human experts but which may contain information that can cause harm and even death in some cases? 

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that falsely pose as human authors?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books posing as human authors for consumers, authors, publishers, and societies?]


[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but there are likely many other niches hidden.

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Tuesday, November 4, 2025

Professors Press SCOTUS to Affirm Copyright Protection for AI-Created Works; IP Watchdog, November 3, 2025

Rose Esfandiari, IP Watchdog; Professors Press SCOTUS to Affirm Copyright Protection for AI-Created Works

"On Friday, October 31, Professors Shlomit Yanisky-Ravid, Lawrence Lessig and a number of other professors and researchers filed an amicus brief with the U.S. Supreme Court in support of Dr. Stephen Thaler’s petition for a writ of certiorari in Thaler v. Perlmutter, urging the Court to grant certiorari and recognize copyright protection for works generated by artificial intelligence (AI).

The brief argued that “excluding AI-generated works from copyright protection threatens the foundations of American creativity, innovation, and economic growth,” warning that the lower court’s interpretation, which requires human authorship, disregards the “spirit of the Copyright Act.”"

AI firm wins high court ruling after photo agency’s copyright claim; The Guardian, November 4, 2025

The Guardian; AI firm wins high court ruling after photo agency’s copyright claim

"A London-based artificial intelligence firm has won a landmark high court case examining the legality of AI models using vast troves of copyrighted data without permission.

Stability AI, whose directors include the Oscar-winning film-maker behind Avatar, James Cameron, successfully resisted a claim from Getty Images that it had infringed the international photo agency’s copyright.

The ruling is seen as a blow to copyright owners’ exclusive right to reap the rewards of their work, with one senior lawyer, Rebecca Newman, a legal director at Addleshaw Goddard, warning it means “the UK’s secondary copyright regime is not strong enough to protect its creators”."

Monday, November 3, 2025

With AI technology rapidly advancing, ethics must evolve as well - opinion; The Jerusalem Post, November 2, 2025

AVI JORISCH, The Jerusalem Post; With AI technology rapidly advancing, ethics must evolve as well - opinion

"Wisdom over intelligence

Whether this century becomes our renaissance or our ruin will depend on a quality that can’t be coded or automated: wisdom.

For all our progress, we have not yet learned to match our technological power with moral imagination. We’ve achieved abundance without equilibrium, connection without community, knowledge without humility. The danger isn’t that machines will become more human – it’s that humans will become more machine-like, optimizing for speed and efficiency while forgetting the soul.

Humanity’s story has always been shaped by invention. The wheel, the compass, the printing press, the microchip – each expanded what we could do, but not necessarily who we could be. The Industrial Revolution lifted billions from poverty, yet it also gave us the mechanized wars of the twentieth century. Nuclear energy promised limitless power – and then birthed Hiroshima.

Today, as we stand on the edge of quantum computing, gene editing, and artificial general intelligence, the pattern repeats. The tools evolve. The ethics lag behind.

We need a new kind of moonshot – not just of science, but of spirit."