Sunday, November 16, 2025

In Memoriam: The Sudden Demise of the AMA Journal of Ethics — A great loss for physicians, the profession, and the public; MedPage Today, November 14, 2025

 Matthew Wynia, MD, MPH, and Kayhan Parsi, JD, PhD, MedPage Today; In Memoriam: The Sudden Demise of the AMA Journal of Ethics — A great loss for physicians, the profession, and the public

"Bioethics is a small field, but we punch above our weight when it comes to writing. Professional journal articles, reports, and policies are arguably our primary written products, since the main job in bioethics is to help clinicians and others navigate ethical challenges in their work. But we also write for the public, in forums like blogs and editorials, since many of the issues we write about have broader implications. Consequently, learning to write for publication is a key skill for bioethicists, and professional journals are critical for the field. One particular journal -- the AMA Journal of Ethics (AMA JoE) -- has been a stalwart in giving a voice to newcomers to the field...

Why Did the AMA Kill its Journal of Ethics?

The AMA is the nation's largest and most influential medical professional organization, and its Journal of Ethics held the mission of "illuminating the art of medicine." It was an open access journal, freely available to all, with no advertising, focusing each month on an important ethical issue in healthcare; perhaps most uniquely, each issue was edited by health professional trainees and their mentors. Only the AMA, with its mission, resources, and reach, could have produced this journal.

One possible reason for its elimination might be financial. But if financial returns were to be a metric for success, then the AMA JoE had a bad business model from the start: no fees, no subscriptions, no advertising. As Kao argued, a guiding premise for the journal was that "ethics inquiry is a public good" -- hence no fees or subscriptions and no ads (avoiding conflicts of interest is critical in ethics inquiry).

For the AMA, the business case for AMA JoE could never have been about profit; rather, it was about demonstrating the AMA's integrity, altruism, and service to physicians from very early in their careers. The journal aimed to build goodwill, bolster the AMA's reputation, improve ethical deliberation within the profession and, most importantly, entice students and trainees to engage seriously with the organization. By these metrics it has succeeded. Over its more than 25 years in existence, the journal drew innumerable medical students, residents, and fellows into the AMA. It also provided a crucial training ground for young people in medicine who wanted to learn about bioethics and about writing and editing, and it helped build the credibility and presence of the AMA and its ethics group nationally and internationally.

So, if it wasn't about profit, perhaps it was the political environment. The journal encouraged medical trainees to explore some of the most contentious challenges facing medicine and society, so it inherently provided opportunities for controversy. Issues this year have addressed themes of private equity in medicine, regret and surgical professionalism, and evidence-based design in healthcare. Meanwhile, issues in prior years have addressed some currently inflammatory topics, like ethical issues related to transgender surgical care and segregation in healthcare. Remarkably, the journal very rarely caused public relations problems for the AMA, perhaps because its editorial staff were highly qualified professionals, but also because its approach to controversy was civil, inquisitive, and exploratory.

As Kao wrote in a farewell essay this month: "For over a quarter of a century, the AMA Journal of Ethics has striven to publish insightful commentaries, engaging podcasts, and provocative artwork that help medical students, physicians, and all health care professionals reflect on and make sound ethical decisions in service to patients and society." In fact, the journal often demonstrated exactly this spirit of respectful discussion about challenging ethical issues that we need to rekindle today, making its loss even more tragic and difficult to explain.

AMA JoE: A Value-Added Offering

In a recent opinion piece in MedPage Today, "Medical Societies Are Facing an Existential Crisis," the authors exhorted medical societies, facing declining memberships and engagement among young physicians, to reimagine their role by offering "free basic memberships supplemented by value-added services [that] could attract early-career physicians who might otherwise remain disengaged." AMA JoE was exactly this type of value-added offering that not only served students and trainees, but also educators across health professions. Anecdotally, many health profession educators we know routinely use pieces from AMA JoE in their teaching and now lament its demise.

The AMA has reportedly promised to keep the historical content of the journal accessible on the AMA JoE website. This is no consolation for the students, residents, and fellows who were working on future issues, but it means the legacy of the journal will live on. Someday, we'd like to believe it might even be revived.

For now, we mourn the loss of AMA JoE for the field of bioethics. Even more, we mourn what the AMA's sudden elimination of its ethics journal might mean for physicians, the profession, and the public."


Saturday, November 15, 2025

Pope Leo XIV’s important warning on ethics of AI and new technology; The Fresno Bee, November 15, 2025

Andrew Fiala, The Fresno Bee; Pope Leo XIV’s important warning on ethics of AI and new technology

"Recently, Pope Leo XIV addressed a conference on artificial intelligence in Rome, where he emphasized the need for deeper consideration of the “ethical and spiritual weight” of new technologies...

This begins with the insight that human beings are tool-using animals. Tools extend and amplify our operational power, and they can also either enhance or undermine who we are and what we care about. 

Whether we are enhancing or undermining our humanity ought to be the focus of moral reflection on technology.

This is a crucial question in the AI era. The AI revolution should lead us to ask fundamental questions about the ethical and spiritual side of technological development. AI is already changing how we think about intellectual work, such as teaching and learning. Human beings are already interacting with artificial systems that provide medical, legal, psychological and even spiritual advice. Are we prepared for all of this morally, culturally and spiritually?...

At the dawn of the age of artificial intelligence, we need a corresponding new dawn of critical moral judgment. Now is the time for philosophers, theologians and ordinary citizens to think deeply about the philosophy of technology and the values expressed or embodied in our tools. 

It will be exciting to see what the wizards of Silicon Valley will come up with next. But wizardry without wisdom is dangerous."

We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.; The Washington Post, November 12, 2025

 

The Washington Post; We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.

"OpenAI has largely promoted ChatGPT as a productivity tool, and in many conversations users asked for help with practical tasks such as retrieving information. But in more than 1 in 10 of the chats The Post analyzed, people engaged the chatbot in abstract discussions, musing on topics like their ideas for breakthrough medical treatments or personal beliefs about the nature of reality.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work. (The Post has a content partnership with OpenAI.)...

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”"

Friday, November 14, 2025

Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession; Cleveland.com, November 12, 2025

 

Cleveland.com; Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession

"A Cleveland defense attorney is under scrutiny in two counties after submitting court filings containing fabrications generated by artificial intelligence — a case that’s prompting broader questions about how lawyers are ethically navigating the use of AI tools in legal practice.

William Norman admitted that a paralegal in his office used ChatGPT to draft a motion to reopen a murder conviction appeal. The document included quotes that did not exist in the trial transcript and misrepresented statements made by the prosecutor."

AMA ethics journal shutters after 26 years; Retraction Watch, November 13, 2025

Retraction Watch; AMA ethics journal shutters after 26 years 

"The American Medical Association will cease publication of its ethics journal at the end of this year. 

The AMA Journal of Ethics, an open access, peer-reviewed journal, was founded in 1999 under the name Virtual Mentor.

“The loss of the AMA JoE will be most acutely felt by medical students and trainees, since it had a unique production model that included them in the process,” said Matthew Wynia, a physician and bioethicist at the University of Colorado whose work has been featured in the journal and who previously led the AMA Institute for Ethics.

The journal publishes monthly issues on a specific theme, such as private equity in health care, antimicrobial resistance, palliative surgery and more. The journal also covered ethics in publishing and research, including a 2015 article titled “How Publish or Perish Promotes Inaccuracy in Science—and Journalism” written by Retraction Watch’s cofounder Ivan Oransky...

The journal’s website will remain online with all content freely available, “in keeping with our guiding premise that ethics inquiry is a public good,” Audiey C. Kao, editor-in-chief of the AMA Journal of Ethics and vice president of the AMA’s Ethics Group for more than two decades, wrote in a statement on the journal’s website. “With humility, I am hopeful and confident that this archived journal content will stay evergreen for years to come.”

The AMA did not provide a reason for the decision to shutter the journal." 

‘This Is the War Against Human Nature’: Paul Kingsnorth argues technology is killing us - physically and spiritually.; The New York Times, November 14, 2025

The New York Times; ‘This Is the War Against Human Nature’: Paul Kingsnorth argues technology is killing us - physically and spiritually.

"A lot of people, myself included, are worried about where technology is taking the human race, and especially how we can stay human in an age of artificial intelligence.

But my guest this week thinks we’re not worried enough. That some kind of apocalypse is all but inevitable — if it isn’t already upon us. That what’s needed now are strategies of resistance, endurance and escape.

And he practices what he preaches, having retreated to the west of Ireland with his family — the better to keep them out of the clutches of what he calls the machine.

But he’s come back to us, for a time, bearing a prophetic message.

Paul Kingsnorth is a novelist and a critic, an environmental activist and a convert to Eastern Orthodoxy. His new book is “Against the Machine: On the Unmaking of Humanity.”"

Inside Colorado's "bullish with guardrails" AI approach; Axios, November 13, 2025

John Frank and Ashley Gold, Axios; Inside Colorado's "bullish with guardrails" AI approach

"Colorado's approach to integrating artificial intelligence into government functions is "bullish with guardrails."

Why it matters: Colorado offers a model for balancing AI innovation with safety, barring the technology from "anything that looks or smells or could possibly be thought of as a consequential decision," David Edinger, the state's chief information officer, told Axios in an interview.


Driving the news: The approach is a directive from Gov. Jared Polis, a former technology entrepreneur who encouraged the state's technology office to embrace AI in government.


The state's Office of Information and Technology created a framework for AI use with the NIST AI Risk Management Framework, considering the needs of different state agencies.


  • The technology is making office work and mundane tasks easier, and state employees with disabilities said AI made them more productive."

‘South Park’ addresses AI-generated videos and copyright with Totoro, Trump and Bluey; The Los Angeles Times, November 13, 2025

Kaitlyn Huamani, Los Angeles Times; ‘South Park’ addresses AI-generated videos and copyright with Totoro, Trump and Bluey

"Droopy Dog, Rocky, Bullwinkle, Popeye and even the beloved preschool character Bluey are mentioned or make appearances in the episode. Representatives for Studio Ghibli also appear, offering a voice of reason in the madness, saying, “You cannot just do whatever you want with someone else’s IP.”"

Who Pays When A.I. Is Wrong?; The New York Times, November 12, 2025

The New York Times; Who Pays When A.I. Is Wrong?

"Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.

With cancellations piling up and their attempts to use Google’s tools to correct the issues proving fruitless, Wolf River executives decided they had no choice but to sue the tech giant for defamation.

“We put a lot of time and energy into building up a good name,” said Justin Nielsen, who founded Wolf River with three of his best friends in 2014 and helped it grow into the state’s largest solar contractor. “When customers see a red flag like that, it’s damn near impossible to win them back.”

Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images. They argue that the cutting-edge technology not only created and published false, damaging information about individuals or groups but, in many cases, continued putting it out even after the companies that built and profit from the A.I. models were made aware of the problem.

Unlike other libel or slander suits, these cases seek to define content that was not created by human beings as defamatory — a novel concept that has captivated some legal experts."

Meet chatbot Jesus: Churches tap AI to save souls — and time; Axios, November 12, 2025

Russell Contreras and Isaac Avilucea, Axios; Meet chatbot Jesus: Churches tap AI to save souls — and time

 "A new digital awakening is unfolding in churches, where pastors and prayer apps are turning to artificial intelligence to reach worshippers, personalize sermons, and power chatbots that resemble God. 

Why it matters: AI is helping some churches stay relevant in the face of shrinking staff, empty pews and growing online audiences. But the practice raises new questions about who, or what, is guiding the flock.


  • New AI-powered apps allow you to "text with Jesus" or "talk to the Bible," giving the impression you are communicating with a deity or angel. 

  • Other apps can create personalized prayers, let you confess your sins or offer religious advice on life's decisions.

  • "What could go wrong?" Robert P. Jones, CEO of the nonpartisan Public Religion Research Institute, sarcastically asks. 

State of play: The U.S. could see an unprecedented 15,000 churches shut their doors this year as a record number of Americans (29%) now are identifying as religiously unaffiliated.


  • Megachurches are consolidating the remaining faithful, but even the most charismatic pastors struggle to offer private counseling with such large congregations.

Zoom in: In recent months, churches have been deploying chatbots to answer frequently asked questions such as service times and event details, and even to share scripture.


  • EpiscoBot, a chatbot developed by the TryTank Research Institute for the Episcopal Church, responds to spiritual or faith-related queries, drawing on church resources.

  • Other AI apps analyze congregational data (attendance and engagement) to tailor outreach and communications.

  • And more pastors are admitting that they use AI to assist in creating sermons or reduce writing time."

Thursday, November 13, 2025

AI Regulation is Not Enough. We Need AI Morals; Time, November 11, 2025

Nicole Brachetti Peretti, Time; AI Regulation is Not Enough. We Need AI Morals

"Pope Leo XIV recently called for “builders of AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.” 

Some tech leaders, including Andreessen Horowitz cofounder Marc Andreessen have mocked such calls. But to do so is a mistake. We don’t just need AI regulation—we need AI morals." 

OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA; The Bookseller, November 12, 2025

Matilda Battersby, The Bookseller; OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA

"A judge’s ruling that legal action by authors against OpenAI for copyright infringement can go ahead reveals “the ease with which generative AI can devastate the market”, according to the Publishers Association (PA).

Last week, a federal judge in the US refused OpenAI’s attempts to dismiss claims by authors that text summaries of published works by ChatGPT (which is owned by OpenAI) infringes their copyrights.

The lawsuit, which is being heard in New York, brings together cases from a number of authors, as well as the Authors Guild, filed in various courts.

In his ruling, which upheld the authors’ right to attempt to sue OpenAI, District Judge Sidney Stein compared George RR Martin’s Game of Thrones to summaries of the novel created by ChatGPT.

Judge Stein said: “[A] discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work because the summary conveys the overall tone and feel of the original work by parroting the plot, characters and themes of the original.”

The class action consolidates 12 complaints being brought against OpenAI and Microsoft. It argues copyrighted books were reproduced to train OpenAI’s artificial intelligence large language models (LLM) and, crucially, that LLMs, including ChatGPT, can infringe copyright via their output, ie the text produced when asked a question.

This landmark legal case is the first to examine whether the output of an AI chatbot infringes copyright, rather than looking at whether the training of the model was an infringement."

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Rock and Roll Hall of Fame Wins Van Halen Photo Copyright Claim; Bloomberg Law, November 11, 2025

Jennifer Kay, Bloomberg Law; Rock and Roll Hall of Fame Wins Van Halen Photo Copyright Claim

"The Rock and Roll Hall of Fame and Museum’s exhibition of a photographer’s images of Eddie Van Halen constitutes fair use and so doesn’t violate copyright laws, a federal judge said."

AI Has Sent Copyright Laws Into Chaos. What You Need to Know About Your Rights Online; CNET, November 11, 2025

Katelyn Chedraoui, CNET ; AI Has Sent Copyright Laws Into Chaos. What You Need to Know About Your Rights Online

"You might not think about copyright very often, but we are all copyright owners and authors. In the age of generative AI, copyright has quickly become one of the most important issues in the development and outputs of chatbots, image and video generators...

What does all of this mean for the future?

Copyright owners are in a bit of a holding pattern for now. But beyond the legal and ethical implications, copyright in the age of AI raises important questions about the value of creative work, the cost of innovation and the ways in which we need or ought to have government intervention and protections."

‘This is fascist America’: Anish Kapoor may sue after border agents pose by his sculpture; The Guardian, November 12, 2025

The Guardian; ‘This is fascist America’: Anish Kapoor may sue after border agents pose by his sculpture

"The artist Anish Kapoor is considering taking legal action after border patrol agents posed for a photo in front of his Cloud Gate sculpture in Chicago, saying the scene represented “fascist America”...

Kapoor took legal action against the National Rifle Association (NRA) after they used an image of Cloud Gate, which was installed in 2006 and is known locally as “the Bean”, in an advert.

He settled out of court with the NRA in 2018. “It’s a bit more complicated with this,” Kapoor said of the more recent incident, “because they’re a full, if you like, national army unit.”"

OpenAI used song lyrics in violation of copyright laws, German court says; Reuters, November 11, 2025

Reuters; OpenAI used song lyrics in violation of copyright laws, German court says

"OpenAI's chatbot ChatGPT violated German copyright laws by reproducing lyrics from songs by best-selling musician Herbert Groenemeyer and others, a court ruled on Tuesday, in a closely watched case against the U.S. firm over its use of lyrics to train its language models.

The regional court in Munich found that the company trained its AI on protected content from nine German songs, including Groenemeyer's hits "Maenner" and "Bochum"."

You’re a Computer Science Major. Don’t Panic.; The New York Times, November 12, 2025

Mary Shaw, The New York Times; You’re a Computer Science Major. Don’t Panic.

"The future of computer science education is to teach students how to master the indispensable skill of supervision.

Why? Because the speed and efficiency of using A.I. to write code is balanced by the reality that it often gets things wrong. These tools are designed to produce results that look convincing, but may still contain errors. A recent survey showed that over half of professional developers use A.I. tools daily, but only about one-third trust their accuracy. When asked what their greatest frustration is about using A.I. tools, two-thirds of respondents answered, “A.I. solutions that are almost right but not quite.”

There is still a need for humans to play a role in coding — a supervisory one, where programmers oversee the use of A.I. tools, determine if A.I.-generated code does what it is supposed to do and make essential repairs to defective code."

Federal Cuts, Immigration Raids and a Slowing Economy Hit Rural Libraries; The New York Times, November 12, 2025

 

The New York Times; Federal Cuts, Immigration Raids and a Slowing Economy Hit Rural Libraries

"“A library is in a lot of ways a kind of civic symbol, a demonstration of a community’s commitment to itself. So what does it mean if that goes away?”"

Tuesday, November 11, 2025

AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’; San Francisco Chronicle, November 10, 2025

Aidin Vaziri, San Francisco Chronicle; AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’

"A country hit made by artificial intelligence has climbed to the top of a Billboard chart — a first for the genre.

The song, “Walk My Walk,” by an artist known as Breaking Rust, is now No. 1 on Billboard’s Country Digital Song Sales chart. But the brooding, gravel-voiced cowboy behind the hit doesn’t exist. At least, not in the traditional sense.

He’s an AI creation with millions of streams, tens of thousands of followers and no verifiable human footprint." 

Pitt School of Medicine Student Innovator is Empowering People to Take Charge of Their Healthcare; University of Pittsburgh Office of Innovation & Entrepreneurship, October 21, 2025

Karen Woolstrum, University of Pittsburgh Office of Innovation & Entrepreneurship; Pitt School of Medicine Student Innovator is Empowering People to Take Charge of Their Healthcare

"Inspiration Strikes in the ER

While her research focuses on cystic fibrosis, Li’s entrepreneurial journey began during a rotation in the emergency room. It dawned on her that many patients in the ER could be empowered to take control of their own health monitoring and potentially avoid traumatic and costly ER visits. She quickly devised an idea for an electronic stethoscope that people can use to measure vital signs of the heart and lungs from home.

In collaboration with a friend, Akshaya Anand, a machine-learning graduate student from the University of Maryland, she founded Korion Health and entered the 2022 Randall Family Big Idea Competition hosted by the Big Idea Center, Pitt’s hub for student innovation (part of the OIE).

They were awarded a modest $2,000 4th-place prize, but the value they received from the month-long competition and mentorship extended far beyond that. The experience of crafting her pitch and having her idea validated in the eyes of experienced entrepreneurs gave her the confidence to continue pursuing the device’s commercial potential.

Next up was a pitch competition hosted by the Product Development Managers Association (PDMA) in which she won first place in the graduate-student category, with the award including consulting hours from local companies such as Bally Design and Lexicon Design that she said “helped me take my half-baked idea and turn it into a prototype to show to investors.”

“This was a high yield for the effort. If it’s something they can hold in their hands it really helps communicate the value proposition,” she added.

From there, things began to snowball. On the same day that she won the UpPrize Social Innovation Competition sponsored by Bank of New York in the racial equity category ($75k), she won the first place prize from the American Heart Association’s EmPOWERED to Serve Business Accelerator ($50k). The resulting publicity attracted the attention of organizers of the Hult Prize Competition, a global student startup competition that receives thousands of applicants each year, who invited her to apply.

“I didn’t know anything about the Hult Prize competition. At first, I thought it was spam,” she admitted.

She had no illusions of advancing to the finals near London, let alone winning the top prize of $1 million -- until she did."

Sunday, November 9, 2025

California Prosecutor Says AI Caused Errors in Criminal Case; Sacramento Bee via Government Technology, November 7, 2025

 Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case

"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.

The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.

The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.

“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”"