Showing posts with label AI hallucinations.

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

You’re a Computer Science Major. Don’t Panic.; The New York Times, November 12, 2025

Mary Shaw, The New York Times; You’re a Computer Science Major. Don’t Panic.

"The future of computer science education is to teach students how to master the indispensable skill of supervision.

Why? Because the speed and efficiency of using A.I. to write code is balanced by the reality that it often gets things wrong. These tools are designed to produce results that look convincing, but may still contain errors. A recent survey showed that over half of professional developers use A.I. tools daily, but only about one-third trust their accuracy. When asked what their greatest frustration is about using A.I. tools, two-thirds of respondents answered, “A.I. solutions that are almost right but not quite.”

There is still a need for humans to play a role in coding — a supervisory one, where programmers oversee the use of A.I. tools, determine if A.I.-generated code does what it is supposed to do and make essential repairs to defective code."

Sunday, November 9, 2025

California Prosecutor Says AI Caused Errors in Criminal Case; Sacramento Bee via Government Technology, November 7, 2025

 Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case

"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.

The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.

The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.

“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”"

Sunday, September 28, 2025

Education report calling for ethical AI use contains over 15 fake sources; Ars Technica, September 12, 2025

Benj Edwards, Ars Technica; Education report calling for ethical AI use contains over 15 fake sources

"On Friday, CBC News reported that a major education reform document prepared for the Canadian province of Newfoundland and Labrador contains at least 15 fabricated citations that academics suspect were generated by an AI language model—despite the same report calling for "ethical" AI use in schools.

"A Vision for the Future: Transforming and Modernizing Education," released August 28, serves as a 10-year roadmap for modernizing the province's public schools and post-secondary institutions. The 418-page document took 18 months to complete and was unveiled by co-chairs Anne Burke and Karen Goodnough, both professors at Memorial University's Faculty of Education, alongside Education Minister Bernard Davis...

The irony runs deep

The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report's 110 recommendations specifically states the provincial government should "provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use."

Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations. "Around the references I cannot find, I can't imagine another explanation," she told CBC. "You're like, 'This has to be right, this can't not be.' This is a citation in a very important document for educational policy.""

Saturday, September 13, 2025

Perplexity's definition of copyright gets it sued by the dictionary; Engadget, September 11, 2025

 Anna Washenko, Engadget; Perplexity's definition of copyright gets it sued by the dictionary

"Merriam-Webster and its parent company Encyclopedia Britannica are the latest to take on AI in court. The plaintiffs have sued Perplexity, claiming that AI company's "answer engine" product unlawfully copies their copyrighted materials. They are also alleging copyright infringement for instances where Perplexity's AI creates false or inaccurate hallucinations that it then wrongly attributes to Britannica or Merriam-Webster. The complaint, filed in New York federal court, is seeking unspecified monetary damages and an order that blocks Perplexity from misusing their content."

Saturday, August 23, 2025

PittGPT debuts today as private AI source for University; University Times, August 21, 2025

Marty Levine, University Times; PittGPT debuts today as private AI source for University

"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.

“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.

“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”

Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT had access to “the best LLMs on the market” — the large language models used to train AI.

Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.

“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”

“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”

On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”

“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”

Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.

Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”

PittGPT joins that list today, Helfrich said.

Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).

And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.

“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"

Wednesday, July 16, 2025

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."

Saturday, June 21, 2025

US patent office wants an AI to scan for prior art, but doesn't want to pay for it; The Register, June 20, 2025

Brandon Vigliarolo, The Register; US patent office wants an AI to scan for prior art, but doesn't want to pay for it

"There is some irony in using AI bots, which are often trained on copyrighted material for which AI firms have shown little regard, to assess the validity of new patents. 

It may not be the panacea the USPTO is hoping for. Lawyers have been embracing AI for something very similar - scanning particular, formal documentation for specific details related to a new analysis - and it's sometimes backfired as the AI has gotten certain details wrong. The Register has reported on numerous instances of legal professionals practically begging to be sanctioned for not bothering to do their legwork, as judges caught them using AI, which borked citations to other legal cases. 

The risk of hallucinating patents that don't exist, or getting patent numbers or other details wrong, means that there'll have to be at least some human oversight. The USPTO had no comment on how this might be accomplished."

Monday, June 2, 2025

Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder; Daily Mail, May 31, 2025

Joe Hutchison for DailyMail.com; Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder

"The case referenced, according to documents, was 'Royer v. Nelson' which did not exist in any legal database and was found to be made up by ChatGPT.

Opposing counsel said that the only way they would find any mention of the case was by using the AI.

They even went as far as to ask the AI if the case was real, noting in a filing that it then apologized and said it was a mistake.

Bednar's attorney, Matthew Barneck, said that the research was done by a clerk and Bednar took all responsibility for failing to review the cases.

He told The Salt Lake Tribune: 'That was his mistake. He owned up to it and authorized me to say that and fell on the sword.'"

Thursday, May 22, 2025

A.I.-Generated Reading List in Chicago Sun-Times Recommends Nonexistent Books; The New York Times, May 21, 2025

The New York Times; A.I.-Generated Reading List in Chicago Sun-Times Recommends Nonexistent Books

"The summer reading list tucked into a special section of The Chicago Sun-Times and The Philadelphia Inquirer seemed innocuous enough.

There were books by beloved authors such as Isabel Allende and Min Jin Lee; novels by best sellers including Delia Owens, Taylor Jenkins Reid and Brit Bennett; and a novel by Percival Everett, a recent Pulitzer Prize winner.

There was just one issue: None of the book titles attributed to the above authors were real. They had been created by generative artificial intelligence.

It’s the latest case of bad A.I. making its way into the news. While generative A.I. has improved, there is still no way to ensure the systems produce accurate information. A.I. chatbots cannot distinguish between what is true and what is false, and they often make things up. The chatbots can spit out information and expert names with an air of authority."

Saturday, May 17, 2025

Anthropic’s law firm throws Claude under the bus over citation errors in court filing; The Register, May 15, 2025

 Thomas Claburn, The Register; Anthropic’s law firm throws Claude under the bus over citation errors in court filing

"An attorney defending AI firm Anthropic in a copyright case brought by music publishers apologized to the court on Thursday for citation errors that slipped into a filing after using the biz's own AI tool, Claude, to format references.

The incident reinforces what's becoming a pattern in legal tech: while AI models can be fine-tuned, people keep failing to verify the chatbot's output, despite the consequences.

The flawed citations, or "hallucinations," appeared in an April 30, 2025 declaration [PDF] from Anthropic data scientist Olivia Chen in a copyright lawsuit music publishers filed in October 2023.

But Chen was not responsible for introducing the errors, which appeared in footnotes 2 and 3.

Ivana Dukanovic, an attorney with Latham & Watkins, the firm defending Anthropic, stated that after a colleague located a supporting source for Chen's testimony via Google search, she used Anthropic's Claude model to generate a formatted legal citation. Chen and defense lawyers failed to catch the errors in subsequent proofreading.

"After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article," explained Dukanovic in her May 15, 2025 declaration [PDF].

"Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors.

"Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai."...

The hallucinations of AI models keep showing up in court filings.

Last week, in a plaintiff's claim against insurance firm State Farm (Jacquelyn Jackie Lacey v. State Farm General Insurance Company et al), former Judge Michael R. Wilner, the Special Master appointed to handle the dispute, sanctioned [PDF] the plaintiff's attorneys for misleading him with AI-generated text. He directed the plaintiff's legal team to pay more than $30,000 in court costs that they wouldn't have otherwise had to bear.

After reviewing a supplemental brief filed by the plaintiffs, Wilner found that "approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way."

Two of the citations, he said, do not exist, and several cited phony judicial opinions."

Thursday, May 15, 2025

Anthropic expert accused of using AI-fabricated source in copyright case; Reuters, May 13, 2025

Reuters; Anthropic expert accused of using AI-fabricated source in copyright case

"Van Keulen asked Anthropic to respond by Thursday to the accusation, which the company said appeared to be an inadvertent citation error. He rejected the music companies' request to immediately question the expert but said the allegation presented "a very serious and grave issue," and that there was "a world of difference between a missed citation and a hallucination generated by AI.""

Monday, February 3, 2025

DeepSeek has ripped away AI’s veil of mystique. That’s the real reason the tech bros fear it; The Observer via The Guardian, February 2, 2025

The Observer via The Guardian; DeepSeek has ripped away AI’s veil of mystique. That’s the real reason the tech bros fear it

"DeepSeek, sponsored by a Chinese hedge fund, is a notable achievement. Technically, though, it is no advance on large language models (LLMs) that already exist. It is neither faster nor “cleverer” than OpenAI’s ChatGPT or Anthropic’s Claude and just as prone to “hallucinations” – the tendency, exhibited by all LLMs, to give false answers or to make up “facts” to fill gaps in its data. According to NewsGuard, a rating system for news and information websites, DeepSeek’s chatbot made false claims 30% of the time and gave no answers to 53% of questions, compared with 40% and 22% respectively for the 10 leading chatbots in NewsGuard’s most recent audit.

The figures expose the profound unreliability of all LLMs. DeepSeek’s particularly high non-response rate is likely to be the product of its censoriousness; it refuses to provide answers on any issue that China finds sensitive or about which it wants facts restricted, whether Tiananmen Square or Taiwan...

Nevertheless, for all the pushback, each time one fantasy prediction fails to materialise, another takes its place. Such claims derive less from technological possibilities than from political and economic needs. While AI technology has provided hugely important tools, capable of surpassing humans in specific fields, from the solving of mathematical problems to the recognition of disease patterns, the business model depends on hype. It is the hype that drives the billion-dollar investment and buys political influence, including a seat at the presidential inauguration."

Monday, January 27, 2025

Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines; WVU Today, January 22, 2025

 WVU Today; Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines

"Two West Virginia University researchers have designed a curriculum to engage liberal arts faculty in discussions on the social, ethical and technical aspects of artificial intelligence and its role in classrooms.

Through a grant from the National Endowment for the Humanities, Erin Brock Carlson, assistant professor of English, and Scott Davidson, professor of philosophy, both at the WVU Eberly College of Arts and Sciences, have designed an interdisciplinary, cross-institutional program to facilitate conversations among faculty about the benefits and drawbacks of AI, how it functions and the need for human interpretation.

The award will fund a summer workshop in which Carlson and Davidson will offer AI trainings for humanities faculty and guide them through creation and development of courses with an AI component. The researchers will then assist as faculty offer those courses to students, assess progress and help with the implementation of the projects that develop.

The researchers said they hope to challenge the notion that artificial intelligence research falls into the domain of STEM fields. 

“The humanities gets overlooked and underappreciated so often,” Carlson said. “We are doing important, meaningful research, just like our colleagues in STEM and other fields. This is a chance to use a humanities lens to examine contemporary problems and developments like artificial intelligence and also to get conversations going between fields that oftentimes don’t talk to one another as much as we should.”

Co-directors Carlson and Davidson will be joined by a team of mentors and fellows — two from data science fields and two from the humanities perspective — that will serve and assist as resources in the interdisciplinary conversations. The seminar and summer workshops will support the creation or redesign of 10 courses. They plan to invite off-campus experts to help facilitate the workshops, work with the faculty and support their projects.

“It’s really about expanding capacity at the University and in the humanities to investigate the implications of AI or to actually use AI in humanities courses, whether it’s for writing, creating art or creating projects through the use of AI,” Davidson said. “There are a lot of different possibilities and directions that we hope these courses take. If we have 10 of them, it’s really going to have a big impact on humanities education here at the University.”

Carlson and Davidson acknowledge that attitudes about AI tend to be either extremely optimistic or extremely skeptical but that the reality is somewhere in the middle.

“AI is such a simplistic term to describe a whole suite of different technologies and developments that folks are dealing with every day, whether they know it or not,” Carlson said, noting that discussions could focus on personal, social and economic impacts of AI use, as well as how it affects character and intellectual values. 

Davidson was inspired to focus on AI when he found an erroneous, AI-generated summary of one of his own articles.

“It was totally wrong,” he said. “I didn’t say those things, and it made me think about how somebody might look me up and find that summary of my article and get this false impression of me. That really highlighted that we need to build an understanding in students of the need to inquire deeper and to understand that you have to be able to evaluate AI’s accuracy and its reliability.”

Carlson and Davidson said the conversations need to consider AI’s drawbacks, as well. Using AI consumes large amounts of water and electricity, resulting in greenhouse gas emissions. Data centers produce electronic waste that can contain mercury and lead.

They also intend to follow legal cases and precedents surrounding the use of AI.

“That’s another aspect of AI and the ways that it represents people,” Carlson said. “Because it has a very real, material impact on people in communities. It’s not just a super computer in a room. It’s a network that has a bunch of different implications for a bunch of different people, ranging from jobs to familial relationships. That’s the value of the humanities — to ask these tough questions because it’s increasingly difficult to avoid all of it.”

Conversations, as they expand, will need to keep up with the pace of AI’s rapidly developing landscape.  

“There’s going to be a lot of people involved in this,” she said. “We put together an amazing team. We want it to be an open, honest and ethical conversation that brings in other folks and opens up further conversations across the College and the University at large.”"

Friday, January 17, 2025

Apple sidelines AI news summaries due to errors; Japan Today, January 17, 2025

 Japan Today; Apple sidelines AI news summaries due to errors

"Apple pushed out a software update on Thursday which disabled news headlines and summaries generated using artificial intelligence that were lambasted for getting facts wrong.

The move by the tech titan comes as it enhances its latest lineup of devices with "Apple Intelligence" in a market keen for assurance that the iPhone maker is a contender in the AI race.

Apple's decision to temporarily disable the recently launched AI feature comes after the BBC and other news organizations complained that users were getting mistake-riddled or outright wrong headlines or news summary alerts."

Thursday, January 9, 2025

Elon Musk says all human data for AI training ‘exhausted’; The Guardian, January 9, 2025

The Guardian; Elon Musk says all human data for AI training ‘exhausted’

"However, Musk also warned that AI models’ habit of generating “hallucinations” – a term for inaccurate or nonsensical output – was a danger for the synthetic data process.

He said in the livestreamed interview with Mark Penn, the chair of the advertising group Stagwell, that hallucinations had made the process of using artificial material “challenging” because “how do you know if it … hallucinated the answer or it’s a real answer”.

Andrew Duncan, the director of foundational AI at the UK’s Alan Turing Institute, said Musk’s comment tallied with a recent academic paper estimating that publicly available data for AI models could run out as soon as 2026. He added that over-reliance on synthetic data risked “model collapse”, a term referring to the outputs of models deteriorating in quality...

High-quality data, and control over it, is one of the legal battlegrounds in the AI boom. OpenAI admitted last year it would be impossible to create tools such as ChatGPT without access to copyrighted material, while the creative industries and publishers are demanding compensation for use of their output in the model training process."

Thursday, December 26, 2024

How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs; The New York Times, December 23, 2024

The New York Times; How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs

"In the universe of science, however, innovators are finding that A.I. hallucinations can be remarkably useful. The smart machines, it turns out, are dreaming up riots of unrealities that help scientists track cancer, design drugs, invent medical devices, uncover weather phenomena and even win the Nobel Prize.

“The public thinks it’s all bad,” said Amy McGovern, a computer scientist who directs a federal A.I. institute. “But it’s actually giving scientists new ideas. It’s giving them the chance to explore ideas they might not have thought about otherwise.”

The public image of science is coolly analytic. Less visibly, the early stages of discovery can teem with hunches and wild guesswork. “Anything goes” is how Paul Feyerabend, a philosopher of science, once characterized the free-for-all.

Now, A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It’s the scientific method — only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers."

Tuesday, December 10, 2024

OpenAI makes AI video generator Sora publicly available in US; The Guardian, December 9, 2024

The Guardian; OpenAI makes AI video generator Sora publicly available in US

"Anyone in the US can now use OpenAI’s artificial intelligence video generator, Sora, which the company announced on Monday would become publicly available. OpenAI first presented Sora in February, but it was only accessible to select artists, film-makers and safety testers. At multiple points on Monday, though, OpenAI’s website did not allow for new sign-ups for Sora, citing heavy traffic...

While generative AI has improved considerably over the past year, it is still prone to hallucinations, or incorrect responses, and plagiarism. AI image generators also often produce unrealistic images, such as people with several arms or misplaced facial features.

Critics warn that this type of AI video technology could be misused by bad actors for disinformation, scams and deepfakes. There have already been deepfake videos of the Ukrainian president, Volodymyr Zelenskyy, supposedly calling for a ceasefire and of Kamala Harris supposedly describing herself as “the ultimate diversity hire”.

OpenAI said in its blogpost that it would initially limit uploads of specific people and that it will block content with nudity. The company said that it was additionally “blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes”.

Sora will be available to users who already subscribe and pay for OpenAI’s tools. People in the US and “most countries internationally” will have access to the tool, but it will not be available in the UK or Europe."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

 John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy.""

Monday, August 19, 2024

New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'; The Law Journal Editorial Board via Law.com, August 16, 2024

 The Law Journal Editorial Board via Law.com; New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'

"On July 29, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on generative artificial intelligence tools. The opinion follows on such opinions and guidance from several state bar associations, as well as similar efforts by non-U.S. bars and regulatory bodies around the world...

Focused on GAI, the opinion addresses six core principles: competence, confidentiality, communication, meritorious claims and candor to tribunal, supervision and fees...

What is not commonly understood, perhaps, is that GAI “hallucinates,” and generates content...

Not addressed in the opinion is whether GAI is engaged in the practice of law...

At the ABA annual meeting, representatives of more than 20 “foreign” bars participated in a roundtable on GAI. In a world of cross-border practice, there was a desire for harmonization."