Katherine Blunt, The Wall Street Journal; AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.
My Bloomsbury book "Ethics, Information, and Technology" was published on Nov. 13, 2025. Purchases can be made via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
Amanda Robert, ABA Journal; Sanctions ramping up in cases involving AI hallucinations
"The use of monetary sanctions against attorneys is seemingly on the rise as courts continue to address artificial intelligence-generated hallucinations in case documents."
Amanda Silberling, TechCrunch; The dictionary sues OpenAI
"Encyclopedia Britannica and Merriam-Webster have filed a lawsuit against OpenAI, alleging in its complaint that the AI giant has committed “massive copyright infringement.”
Britannica, which owns Merriam-Webster, retains the copyright to nearly 100,000 online articles, which have been scraped and used to train OpenAI’s LLMs without permission, the publisher alleges in the lawsuit.
Britannica also accuses OpenAI of violating copyright laws when it generates outputs that contain “full or partial verbatim reproductions” of its content and when the AI lab uses its articles in ChatGPT’s RAG (retrieval augmented generation) workflow. OpenAI’s RAG tool is how the LLM scans the web or other databases for newly updated information when responding to a query. Britannica also alleges that OpenAI violates the Lanham Act, a trademark statute, when it generates made-up hallucinations and attributes them falsely to the publisher."
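The retrieval-augmented generation workflow named in the complaint can be sketched in a few lines. Everything below — the function names, the toy corpus, the keyword-overlap scoring — is illustrative only, not OpenAI's actual pipeline:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# All names and the scoring method here are illustrative,
# not OpenAI's implementation.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))[:k]

def build_prompt(query, corpus):
    """Paste retrieved text into the prompt a (hypothetical) model would see."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "encyclopedia article about copyright infringement law",
    "dictionary entry defining the word infringement",
    "unrelated travel article about cheap flights",
]
prompt = build_prompt("copyright infringement", corpus)
```

The point of the pattern is that retrieved text is pasted into the model's prompt verbatim, which is why a publisher's articles can surface in outputs even when the model was never trained on them.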
Emily Peck, Axios; AI legal advice is driving lawyers bananas
"AI promises to make work more productive for lawyers, but there's a problem: Their clients are using it, too.
Why it matters: The rise of AI is creating new headaches for attorneys: They're worried about the fate of the billable hour, a reliable profit center for aeons, and are perturbed by clients getting bad legal advice from chatbots.
Zoom in: "It's like the WebMD effect on steroids," says Dave Jochnowitz, a partner at the law firm Outten & Golden, referring to how medical websites can give people a misguided understanding of their condition."
Rhys Blakely, The Times; Publisher under fire after ‘fake’ citations found in AI ethics guide
"One of the world’s largest academic publishers is selling a book on the ethics of artificial intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.
Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.
The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material."
Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures
"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.
This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.
The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.
To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.
We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.
And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.
At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."
Retraction Watch; The case of the fake references in an ethics journal
"Many would-be whistleblowers write to us about papers with nonexistent references, possibly hallucinated by artificial intelligence. One reader recently alerted us to fake references in … an ethics journal. In an article about whistleblowing.
The paper, published in April in the Journal of Academic Ethics, explored “the whistleblowing experiences of individuals with disabilities in Ethiopian public educational institutions.”
Erja Moore, an independent researcher based in Finland, came across the article while looking into a whistleblowing case in that country. “I started reading this article and found some interesting references that I decided to read as well,” Moore told Retraction Watch. “To my surprise, those articles didn’t exist.”...
The Journal of Academic Ethics is published by Springer Nature. Eleven of the fabricated references cite papers in the Journal of Business Ethics — another Springer Nature title.
“On one hand this is hilarious that an ethics journal publishes this, but on the other hand it seems that this is a much bigger problem in publishing and we can’t really trust scientific articles any more,” Moore said."
Shaila Dewan, The New York Times; Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say
"On Friday, the lawyers were joined by a group of 22 legal and technology scholars who warned that the unchecked use of A.I. could lead to wrongful convictions. The group, which filed its own brief with the state Supreme Court, included Barry Scheck, a co-founder of the Innocence Project, which has helped to exonerate more than 250 people; Chesa Boudin, a former district attorney of San Francisco; and Katherine Judson, executive director of the Center for Integrity in Forensic Sciences, a nonprofit that seeks to improve the reliability of criminal prosecutions.
The problem of A.I.-generated errors in legal papers has burgeoned along with the popular use of tools like ChatGPT and Gemini, which can perform a wide range of tasks, including writing emails, term papers and legal briefs. Lawyers and even judges have been caught filing court papers that were rife with fake legal references and faulty arguments, leading to embarrassment and sometimes hefty fines.
The Kjoller case, though, is one of the first in which prosecutors, whose words carry great sway with judges and juries, have been accused of using A.I. without proper safeguards...
Lawyers are not prohibited from using A.I., but they are required to ensure that their briefs, however they are written, are accurate and faithful to the law. Today’s artificial intelligence tools are known to sometimes “hallucinate,” or make things up, especially when asked complex legal questions...
Westlaw executives said that their A.I. tool does not write legal briefs, because they believe A.I. is not yet capable of the complex reasoning needed to do so...
Damien Charlotin, a senior researcher at HEC Paris, maintains a database that includes more than 590 cases from around the world in which courts and tribunals have detected hallucinated content. More than half involved people who represented themselves in court. Two-thirds of the cases were in United States courts. Only one, an Israeli case, involved A.I. use by a prosecutor."
Gabe Castro-Root, The New York Times; What Is Agentic A.I., and Would You Trust It to Book a Flight?
"A bot may soon be booking your vacation.
Millions of travelers already use artificial intelligence to compare options for flights, hotels, rental cars and more. About 30 percent of U.S. travelers say they’re comfortable using A.I. to plan a trip. But these tools are about to take a big step.
Agentic A.I., a rapidly emerging type of artificial intelligence, will be able to find and pay for reservations with limited human involvement, developers say. Companies like Expedia, Google, Kayak and Priceline are experimenting with or rolling out agentic A.I. tools.
Travelers using agentic A.I. would set parameters like dates and a price range for their travel plans, then hand over their credit card information to the bot, which would monitor prices and book on their behalf...
Think of agentic A.I. as a personal assistant, said Shilpa Ranganathan, the chief product officer at Expedia Group, which is developing both generative and agentic A.I. trip-planning tools.
While the more familiar generative A.I. can summarize information and answer questions, agentic tools can carry out tasks. Travelers benefit by deputizing these tools to perform time-consuming chores like tracking flight prices."
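The division of labor described above — the traveler sets the parameters, the agent watches and acts — reduces to a simple loop. The sketch below is a toy, with made-up flight IDs and no real booking API; it only shows the "set parameters, then delegate the action" shape that distinguishes agentic from generative tools:

```python
# Toy sketch of an agentic booking loop. All names and data are
# hypothetical; no real travel API works this way.

def run_agent(price_feed, max_price):
    """Scan a sequence of (flight_id, price) quotes and 'book' the
    first one within budget; otherwise keep waiting."""
    for flight_id, price in price_feed:
        if price <= max_price:
            return {"action": "book", "flight": flight_id, "price": price}
    return {"action": "wait"}

# The traveler's only inputs: a budget and a stream of quotes to monitor.
quotes = [("NYC-LIS-1", 720), ("NYC-LIS-2", 650), ("NYC-LIS-3", 489)]
decision = run_agent(quotes, max_price=500)
```

A real agent would wrap a loop like this in safeguards — spending caps, confirmation steps, audit logs — before anything touched a credit card.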
Evan Gorelick, The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings
"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.
While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.
But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.
“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...
The problem, though, keeps getting worse.
That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.
Initially he found three or four examples a month. Now he often receives that many in a day.
Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”
Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."
Mary Shaw and Michael Hilton, The New York Times; You’re a Computer Science Major. Don’t Panic.
"The future of computer science education is to teach students how to master the indispensable skill of supervision.
Why? Because the speed and efficiency of using A.I. to write code is balanced by the reality that it often gets things wrong. These tools are designed to produce results that look convincing, but may still contain errors. A recent survey showed that over half of professional developers use A.I. tools daily, but only about one-third trust their accuracy. When asked what their greatest frustration is about using A.I. tools, two-thirds of respondents answered, “A.I. solutions that are almost right but not quite.”
There is still a need for humans to play a role in coding — a supervisory one, where programmers oversee the use of A.I. tools, determine if A.I.-generated code does what it is supposed to do and make essential repairs to defective code."
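The "almost right but not quite" failure mode, and the supervisory skill the authors call for, can be made concrete with a hypothetical example: an AI-drafted function with a plausible-looking off-by-one, and the human-written check that catches it.

```python
# Hypothetical "almost right" AI-drafted function: it is meant to
# return the first n items, but the slice drops one of them.
def first_n_items_ai_draft(items, n):
    return items[:n - 1]   # bug: should be items[:n]

# The supervisory step: a human-written check against the spec.
def meets_spec(fn):
    return fn([1, 2, 3, 4], 2) == [1, 2]

# The corrected version that passes review.
def first_n_items_fixed(items, n):
    return items[:n]
```

The draft looks convincing and runs without error — exactly the kind of defect that only deliberate human verification, not a quick skim, reliably catches.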
Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case
"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.
The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.
The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.
“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”
Benj Edwards, Ars Technica; Education report calling for ethical AI use contains over 15 fake sources
"On Friday, CBC News reported that a major education reform document prepared for the Canadian province of Newfoundland and Labrador contains at least 15 fabricated citations that academics suspect were generated by an AI language model—despite the same report calling for "ethical" AI use in schools.
"A Vision for the Future: Transforming and Modernizing Education," released August 28, serves as a 10-year roadmap for modernizing the province's public schools and post-secondary institutions. The 418-page document took 18 months to complete and was unveiled by co-chairs Anne Burke and Karen Goodnough, both professors at Memorial University's Faculty of Education, alongside Education Minister Bernard Davis...
The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report's 110 recommendations specifically states the provincial government should "provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use."
Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations. "Around the references I cannot find, I can't imagine another explanation," she told CBC. "You're like, 'This has to be right, this can't not be.' This is a citation in a very important document for educational policy.""
Anna Washenko, Engadget; Perplexity's definition of copyright gets it sued by the dictionary
MARTY LEVINE, University Times; PittGPT debuts today as private AI source for University
"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.
“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.
“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”
Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT had access to “the best LLMs on the market” — the large language models that power generative AI.
Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.
“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”
“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”
On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”
“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”
Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.
Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”
PittGPT joins that list today, Helfrich said.
Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).
And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.
“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"
Matt White, Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies
"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.
In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on “agentic AI workflows.”...
The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."
Brandon Vigliarolo, The Register; US patent office wants an AI to scan for prior art, but doesn't want to pay for it
"There is some irony in using AI bots, which are often trained on copyrighted material for which AI firms have shown little regard, to assess the validity of new patents.
It may not be the panacea the USPTO is hoping for. Lawyers have been embracing AI for something very similar - scanning particular, formal documentation for specific details related to a new analysis - and it's sometimes backfired as the AI has gotten certain details wrong. The Register has reported on numerous instances of legal professionals practically begging to be sanctioned for not bothering to do their legwork, as judges caught them using AI, which borked citations to other legal cases.
The risk of hallucinating patents that don't exist, or getting patent numbers or other details wrong, means that there'll have to be at least some human oversight. The USPTO had no comment on how this might be accomplished."
Joe Hutchison, Daily Mail; Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder
"The case referenced, according to documents, was 'Royer v. Nelson' which did not exist in any legal database and was found to be made up by ChatGPT.
Opposing counsel said that the only way they would find any mention of the case was by using the AI.
They even went as far as to ask the AI if the case was real, noting in a filing that it then apologized and said it was a mistake.
Bednar's attorney, Matthew Barneck, said that the research was done by a clerk and Bednar took all responsibility for failing to review the cases.
He told The Salt Lake Tribune: 'That was his mistake. He owned up to it and authorized me to say that and fell on the sword.'"
Talya Minsberg , The New York Times; A.I.-Generated Reading List in Chicago Sun-Times Recommends Nonexistent Books
"The summer reading list tucked into a special section of The Chicago Sun-Times and The Philadelphia Inquirer seemed innocuous enough.
There were books by beloved authors such as Isabel Allende and Min Jin Lee; novels by best sellers including Delia Owens, Taylor Jenkins Reid and Brit Bennett; and a novel by Percival Everett, a recent Pulitzer Prize winner.
There was just one issue: None of the book titles attributed to the above authors were real. They had been created by generative artificial intelligence.
It’s the latest case of bad A.I. making its way into the news. While generative A.I. has improved, there is still no way to ensure the systems produce accurate information. A.I. chatbots cannot distinguish between what is true and what is false, and they often make things up. The chatbots can spit out information and expert names with an air of authority."