Showing posts with label AI hallucinations.

Friday, January 17, 2025

Apple sidelines AI news summaries due to errors; Japan Today, January 17, 2025

 Japan Today; Apple sidelines AI news summaries due to errors

"Apple pushed out a software update on Thursday which disabled news headlines and summaries generated using artificial intelligence that were lambasted for getting facts wrong.

The move by the tech titan comes as it enhances its latest lineup of devices with "Apple Intelligence" in a market keen for assurance that the iPhone maker is a contender in the AI race.

Apple's decision to temporarily disable the recently launched AI feature comes after the BBC and other news organizations complained that users were getting mistake-riddled or outright wrong headlines or news summary alerts."

Thursday, January 9, 2025

Elon Musk says all human data for AI training ‘exhausted’; The Guardian, January 9, 2025

The Guardian; Elon Musk says all human data for AI training ‘exhausted’

"However, Musk also warned that AI models’ habit of generating “hallucinations” – a term for inaccurate or nonsensical output – was a danger for the synthetic data process.

He said in the livestreamed interview with Mark Penn, the chair of the advertising group Stagwell, that hallucinations had made the process of using artificial material “challenging” because “how do you know if it … hallucinated the answer or it’s a real answer”.

Andrew Duncan, the director of foundational AI at the UK’s Alan Turing Institute, said Musk’s comment tallied with a recent academic paper estimating that publicly available data for AI models could run out as soon as 2026. He added that over-reliance on synthetic data risked “model collapse”, a term referring to the outputs of models deteriorating in quality...

High-quality data, and control over it, is one of the legal battlegrounds in the AI boom. OpenAI admitted last year it would be impossible to create tools such as ChatGPT without access to copyrighted material, while the creative industries and publishers are demanding compensation for use of their output in the model training process."
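
To make “model collapse” concrete, here is a toy Python sketch (ours, not the article’s): a “model” that is nothing more than a Gaussian fit, refit each generation only on samples drawn from the previous fit. The seed, sample size, and generation count are arbitrary illustrative choices.

```python
# Toy illustration of "model collapse": refitting a model, generation
# after generation, on its own synthetic output. The "model" is just a
# Gaussian (mean, std) fitted to samples.
import random
import statistics

random.seed(0)
N = 20  # a small sample per generation exaggerates the effect

# Generation 0: "human" data from the true distribution N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for gen in range(201):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Refit purely on synthetic samples from the current fit. The fitted
    # spread is noisy and biased low, so successive generations tend to
    # lose the tails first and describe the original data less and less well.
    data = [random.gauss(mu, sigma) for _ in range(N)]
```

Run it and the fitted standard deviation tends to decay across generations; the tails of the original distribution vanish first, which is the flavor of deterioration “model collapse” names.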

Thursday, December 26, 2024

How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs; The New York Times, December 23, 2024

The New York Times; How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs

"In the universe of science, however, innovators are finding that A.I. hallucinations can be remarkably useful. The smart machines, it turns out, are dreaming up riots of unrealities that help scientists track cancer, design drugs, invent medical devices, uncover weather phenomena and even win the Nobel Prize.

“The public thinks it’s all bad,” said Amy McGovern, a computer scientist who directs a federal A.I. institute. “But it’s actually giving scientists new ideas. It’s giving them the chance to explore ideas they might not have thought about otherwise.”

The public image of science is coolly analytic. Less visibly, the early stages of discovery can teem with hunches and wild guesswork. “Anything goes” is how Paul Feyerabend, a philosopher of science, once characterized the free-for-all.

Now, A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It’s the scientific method — only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers."

Tuesday, December 10, 2024

OpenAI makes AI video generator Sora publicly available in US; The Guardian, December 9, 2024

The Guardian; OpenAI makes AI video generator Sora publicly available in US

"Anyone in the US can now use OpenAI’s artificial intelligence video generator, Sora, which the company announced on Monday would become publicly available. OpenAI first presented Sora in February, but it was only accessible to select artists, film-makers and safety testers. At multiple points on Monday, though, OpenAI’s website did not allow for new sign-ups for Sora, citing heavy traffic...

While generative AI has improved considerably over the past year, it is still prone to hallucinations, or incorrect responses, and plagiarism. AI image generators also often produce unrealistic images, such as people with several arms or misplaced facial features.

Critics warn that this type of AI video technology could be misused by bad actors for disinformation, scams and deepfakes. There have already been deepfake videos of the Ukrainian president, Volodymyr Zelenskyy, supposedly calling for a ceasefire and of Kamala Harris supposedly describing herself as “the ultimate diversity hire”.

OpenAI said in its blogpost that it would initially limit uploads of specific people and that it will block content with nudity. The company said that it was additionally “blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes”.

Sora will be available to users who already subscribe and pay for OpenAI’s tools. People in the US and “most countries internationally” will have access to the tool, but it will not be available in the UK or Europe."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

 John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."
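
Volokh’s point that “one still has to check that output” invites at least one mechanical first step: pull every reporter-style citation out of an AI draft so each can be looked up by hand in an official source. A minimal, hypothetical Python sketch (ours, not the article’s; the regex is illustrative, far from exhaustive, and no lookup service is assumed):

```python
# Extract reporter-style case citations (e.g., "123 F.3d 456") from
# AI-generated text and print a manual-verification checklist.
import re

CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+"                                                 # volume
    r"(U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+"  # reporter
    r"(\d{1,5})\b"                                                    # first page
)

draft = (
    "Plaintiff relies on Smith v. Jones, 123 F.3d 456 (9th Cir. 1997), "
    "and Doe v. Roe, 555 U.S. 101 (2009)."
)

for volume, reporter, page in CITATION_RE.findall(draft):
    print(f"VERIFY in an official reporter: {volume} {reporter} {page}")
```

A match means only that the string looks like a citation, not that the case exists, which is precisely the hallucination problem; anything the pattern misses still needs a human read.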

Monday, August 19, 2024

New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'; The Law Journal Editorial Board via Law.com, August 16, 2024

 The Law Journal Editorial Board via Law.com; New ABA Rules on AI and Ethics Shows the Technology Is 'New Wine in Old Bottles'

"On July 29, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on generative artificial intelligence tools. The opinion follows on such opinions and guidance from several state bar associations, as well as similar efforts by non-U.S. bars and regulatory bodies around the world...

Focused on GAI, the opinion addresses six core principles: competence, confidentiality, communication, meritorious claims and candor to tribunal, supervision and fees...

What is not commonly understood, perhaps, is that GAI “hallucinates,” and generates content...

Not addressed in the opinion is whether GAI is engaged in the practice of law...

At the ABA annual meeting, representatives of more than 20 “foreign” bars participated in a roundtable on GAI. In a world of cross-border practice, there was a desire for harmonization."

Friday, June 21, 2024

Using AI to Create Content? Watch Out for Copyright Violations; Chicago Business Attorney Blog, June 20, 2024

Chicago Business Attorney Blog; Using AI to Create Content? Watch Out for Copyright Violations

"Businesses using generative AI programs like ChatGPT to create any content—whether for blogs, websites or other marketing materials, and whether text, visuals, sound or video—need to ensure that they’re not inadvertently using copyrighted materials in the process.

Clearly, the times they are a changing….and businesses need to adapt to the changes.  Employers should promulgate messages to their employees and contractors updating their policy manuals to ensure that communications professionals and others crafting content are aware of the risks of using AI-generated materials, which go beyond the possibility that they are “hallucinated” rather than factual—although that’s worth considering, too."

Friday, June 7, 2024

‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped; The New York Times, June 6, 2024

Tripp Mickle, The New York Times; ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

"As of early April, Humane had received around 10,000 orders for the Ai Pin, a small fraction of the 100,000 that it hoped to sell this year, two people familiar with its sales said. In recent months, the company has also grappled with employee departures and changed a return policy to address canceled orders. On Wednesday, it asked customers to stop using the Ai Pin charging case because of a fire risk associated with its battery.

Its setbacks are part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated and Samsung has added A.I. features to a smartphone that were called “excellent at times and baffling at others.”"

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Mirac Suzgun, Stanford University; Faiz Surani, Stanford University; Christopher D. Manning, Stanford University; Matthew Dahl, Yale University; Daniel E. Ho, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext 2023) or “avoid[ing]” hallucinations (Thomson Reuters 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first pre-registered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."
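
For readers unfamiliar with the technique the abstract evaluates: retrieval-augmented generation grounds the model by retrieving relevant source passages and placing them in the prompt before generating. Below is a minimal Python sketch of the pattern (ours, not the paper’s; the toy corpus, word-overlap scorer, and stubbed generate() are hypothetical stand-ins, not any vendor’s system).

```python
# Skeleton of the retrieval-augmented generation (RAG) pattern:
# retrieve supporting passages, then condition generation on them.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"[model answer conditioned on a {len(prompt)}-char prompt]"

def rag_answer(query: str, corpus: list[str]) -> str:
    passages = retrieve(query, corpus)
    # Grounding the prompt in retrieved text reduces, but does not
    # eliminate, hallucination: the model can still misread, misquote,
    # or ignore the passages, which is what the study measures.
    prompt = ("Answer using ONLY these sources:\n"
              + "\n".join(passages)
              + f"\n\nQuestion: {query}")
    return generate(prompt)

corpus = [
    "Roe v. Wade, 410 U.S. 113 (1973), addressed abortion rights.",
    "Marbury v. Madison established judicial review in 1803.",
    "Erie Railroad Co. v. Tompkins concerned federal common law.",
]
print(rag_answer("What did Marbury v. Madison establish?", corpus))
```

Grounding narrows the gap but, per the study’s 17% to 33% figures, does not close it.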

Thursday, May 23, 2024

US intelligence agencies’ embrace of generative AI is at once wary and urgent; Associated Press, May 23, 2024

Frank Bajak, Associated Press; US intelligence agencies’ embrace of generative AI is at once wary and urgent

"The CIA’s inaugural chief technology officer, Nand Mulchandani, thinks that because gen AI models “hallucinate” they are best treated as a “crazy, drunk friend” — capable of great insight and creativity but also bias-prone fibbers. There are also security and privacy issues: adversaries could steal and poison them, and they may contain sensitive personal data that officers aren’t authorized to see.

That’s not stopping the experimentation, though, which is mostly happening in secret. 

An exception: Thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed gen AI called Osiris. It runs on unclassified and publicly or commercially available data — what’s known as open-source. It writes annotated summaries and its chatbot function lets analysts go deeper with queries...

Another worry: Ensuring the privacy of “U.S. persons” whose data may be embedded in a large-language model.

“If you speak to any researcher or developer that is training a large-language model, and ask them if it is possible to basically kind of delete one individual piece of information from an LLM and make it forget that -- and have a robust empirical guarantee of that forgetting -- that is not a thing that is possible,” John Beieler, AI lead at the Office of the Director of National Intelligence, said in an interview.

It’s one reason the intelligence community is not in “move-fast-and-break-things” mode on gen AI adoption."