Showing posts with label lawyers. Show all posts

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Tuesday, June 3, 2025

Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond; The Federalist Society, June 3, 2025 12 PM EDT

 The Federalist Society; Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond

"The idea of Artificial Intelligence has long presented potential challenges in the legal realm, and as AI tools become more broadly available and widely used, those potential hurdles are becoming ever more salient for lawyers in their day-to-day operations. Questions abound, from what potential risks of bias and error may exist in using an AI tool, to the challenges related to professional responsibility as traditionally understood, to the risks large language learning models pose to client confidentiality. Some contend that AI is a must-use, as it opens the door to faster, more efficient legal research that could equip lawyers to serve their clients more effectively. Others reject the use of AI, arguing that the risks of use and the work required to check the output it gives exceed its potential benefit.

Join us for a FedSoc Forum exploring the ethical and legal implications of artificial intelligence in the practice of law.

Featuring: 

  • Laurin H. Mills, Member, Werther & Mills, LLC
  • Philip A. Sechler, Senior Counsel, Alliance Defending Freedom
  • Prof. Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law Emeritus, UCLA School of Law; Thomas M. Siebel Senior Fellow, Hoover Institution, Stanford University
  • (Moderator) Hon. Brantley Starr, District Judge, United States District Court for the Northern District of Texas"

Monday, June 2, 2025

Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder; Daily Mail, May 31, 2025

Joe Hutchison, Daily Mail; Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder

"The case referenced, according to documents, was 'Royer v. Nelson' which did not exist in any legal database and was found to be made up by ChatGPT.

Opposing counsel said that the only way they would find any mention of the case was by using the AI.

They even went as far as to ask the AI if the case was real, noting in a filing that it then apologized and said it was a mistake.

Bednar's attorney, Matthew Barneck, said that the research was done by a clerk and Bednar took all responsibility for failing to review the cases.

He told The Salt Lake Tribune: 'That was his mistake. He owned up to it and authorized me to say that and fell on the sword.'"

Saturday, May 17, 2025

Anthropic’s law firm throws Claude under the bus over citation errors in court filing; The Register, May 15, 2025

 Thomas Claburn, The Register; Anthropic’s law firm throws Claude under the bus over citation errors in court filing

"An attorney defending AI firm Anthropic in a copyright case brought by music publishers apologized to the court on Thursday for citation errors that slipped into a filing after using the biz's own AI tool, Claude, to format references.

The incident reinforces what's becoming a pattern in legal tech: while AI models can be fine-tuned, people keep failing to verify the chatbot's output, despite the consequences.

The flawed citations, or "hallucinations," appeared in an April 30, 2025 declaration [PDF] from Anthropic data scientist Olivia Chen in a copyright lawsuit music publishers filed in October 2023.

But Chen was not responsible for introducing the errors, which appeared in footnotes 2 and 3.

Ivana Dukanovic, an attorney with Latham & Watkins, the firm defending Anthropic, stated that after a colleague located a supporting source for Chen's testimony via Google search, she used Anthropic's Claude model to generate a formatted legal citation. Chen and defense lawyers failed to catch the errors in subsequent proofreading.

"After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article," explained Dukanovic in her May 15, 2025 declaration [PDF].

"Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors.

"Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai."...

The hallucinations of AI models keep showing up in court filings.

Last week, in a plaintiff's claim against insurance firm State Farm (Jacquelyn Jackie Lacey v. State Farm General Insurance Company et al), former Judge Michael R. Wilner, the Special Master appointed to handle the dispute, sanctioned [PDF] the plaintiff's attorneys for misleading him with AI-generated text. He directed the plaintiff's legal team to pay more than $30,000 in court costs that they wouldn't have otherwise had to bear.

After reviewing a supplemental brief filed by the plaintiffs, Wilner found that "approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way."

Two of the citations, he said, do not exist, and several cited phony judicial opinions."

Wednesday, May 7, 2025

ASU Law launches Wolin Family Center for Intellectual Property Law; ASU News, May 6, 2025

Kourtney Kelley, ASU News; ASU Law launches Wolin Family Center for Intellectual Property Law

"The Sandra Day O’Connor College of Law at Arizona State University has launched the Wolin Family Center for Intellectual Property Law — a groundbreaking initiative designed to shape the future of intellectual property law through legal education and industry collaboration.

With a robust curriculum, hands-on experiential learning and strategic partnerships, the Wolin Center will serve as a national leader in preparing the next generation of attorneys to navigate the rapidly evolving IP landscape. 

The center will focus on core areas of IP law — including patents, trademarks, copyrights and trade secrets — while also tackling emerging opportunities such as AI-generated content, digital copyright and data privacy...

The center is named for Harry and Tracy Wolin, who met in the Phoenix area while working in Motorola’s intellectual property department. Harry retired in 2024 from AMD, a Silicon Valley-based semiconductor company, where he was senior vice president, general counsel and corporate secretary for more than 20 years. Prior to becoming general counsel in 2003, Harry was vice president of intellectual property. He is an alumnus of ASU Law, having graduated with his JD in 1988."

AI of dead Arizona road rage victim addresses killer in court; The Guardian, May 6, 2025

 Cy Neff, The Guardian; AI of dead Arizona road rage victim addresses killer in court

"Pelkey’s appearance from beyond the grave was made possible by artificial intelligence in what could be the first use of AI to deliver a victim impact statement. Stacey Wales, Pelkey’s sister, told local outlet ABC-15 that she had a recurring thought when gathering more than 40 impact statements from Chris’s family and friends.

“All I kept coming back to was, what would Chris say?” Wales said.

As AI spreads across society and enters the courtroom, the US judicial conference advisory committee has announced that it will begin seeking public comment as part of determining how to regulate the use of AI-generated evidence at trial."

Saturday, November 16, 2024

Tracking The Slow Movement Of AI Copyright Cases; Law360, November 7, 2024

Mark Davies and Anna Naydonov, Law360; Tracking The Slow Movement Of AI Copyright Cases

"There is a considerable gap between assumptions in the technology community and assumptions in the legal community concerning how long the legal questions around artificial intelligence and copyright law will take to reach resolution.

The principal litigated question asks whether copyright law permits or forbids the process by which AI systems are using copyright works to generate additional works.[1] AI technologists expect that the U.S. Supreme Court will resolve these questions in a few years.[2] Lawyers expect it to take much longer.[3] History teaches the answer...

Mark S. Davies and Anna B. Naydonov are partners at White & Case LLP.

Mark Davies represented Stephen Thaler in Thaler v. Vidal, Oracle in Google v. Oracle, and filed an amicus brief on behalf of a design professional in Apple v. Samsung."

Friday, October 4, 2024

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

  Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practices, tracing the history of GenAI applications in law and distinguishing between the various AI tools available today. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

 John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Monday, September 23, 2024

Generative AI and Legal Ethics; JD Supra, September 20, 2024

Craig Brodsky, Goodell, DeVries, Leech & Dann, LLP, JD Supra; Generative AI and Legal Ethics

 "In his scathing opinion, Cullen joined judges from New York Massachusetts and North Carolina, among others, by concluding that improper use of AI generated authorities may give rise to sanctions and disciplinary charges...

As a result, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued the opinion primarily because GAI tools are a “rapidly moving target” that can create significant ethical issues. The committee believed it necessary to offer “general guidance for lawyers attempting to navigate this emerging landscape.”

The committee’s general guidance is helpful, but the general nature of Opinion 512 underscores part of my main concern — GAI has a wide-ranging impact on how lawyers practice that will increase over time. Unsurprisingly, at present, GAI implicates at least eight ethical rules ranging from competence (Md. Rule 19-301.1) to communication (Md. Rule 19-301.4), to fees (Md. Rule 19-301.5), to confidentiality (Md. Rule 19-301.6), to supervisory obligations (Md. Rule 19-305.1 and Md. Rule 19-305.3), to the duties of a lawyer before a tribunal to be candid and pursue meritorious claims and defenses (Md. Rules 19-303.1 and 19-303.3).

As a technological feature of practice, lawyers cannot simply ignore GAI. The duty of competence under Rule 19-301.1 includes technical competence, and GAI is just another step forward. It is here to stay. We must embrace it but use it smartly.

Let it be an adjunct to your practice rather than having ChatGPT write your brief. Ensure that your staff understands that GAI can be helpful, but that the work product must be checked for accuracy.

After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

 Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

  James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including FloridaCaliforniaNew YorkDC as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

Thursday, June 6, 2024

Can you become a lawyer by listening to Taylor Swift?; BBC, June 3, 2024

 Jamie Russell, BBC; Can you become a lawyer by listening to Taylor Swift?

"While Taylor Swift was on her record-breaking Eras Tour, one Swiftie was using the star’s music in a bid to pass her law degree.

Regan Caie, a law student at the University of Glasgow, wrote her fourth-year dissertation about Swift's re-recorded albums and copyright law.

The music star has re-recorded and re-released four of her first six albums over a copyright dispute with producer and artist manager Scooter Braun.

Regan, 21, said her dissertation combined her love of Swift's music with her ambition to eventually specialise in copyright and intellectual property law."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Mirac Suzgun, Stanford University; Faiz Surani, Stanford University; Christopher D. Manning, Stanford University; Matthew Dahl, Yale University; Daniel E. Ho, Stanford University; Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

Tuesday, January 30, 2024

Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark; JDSupra, January 29, 2024

 Ralph Artigliere , JDSupra; Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark

"As a former Florida trial lawyer and judge who appreciates emerging technology, I admit that I had more than a little concern when The Florida Bar announced it was working on a new ethics opinion on generative AI. Generative AI promises to provide monumental advantages to lawyers in their workflow, quality of work product, productivity, and time management and more. For clients, use of generative AI by their lawyers can mean better legal services delivered faster and with greater economy. In the area of eDiscovery, generative AI promises to surpass technology assisted review in helping manage the increasingly massive amounts of data.

Generative AI is new to the greater world, and certainly to busy lawyers who are not reading every blogpost on AI. The internet and journals are afire over concerns of hallucinations, confidentiality, bias, and the like. I felt a new ethics opinion might throw a wet blanket on generative AI and discourage Florida lawyers from investigating the new technology.

Thankfully, my concerns did not become reality. The Florida Bar took a thorough look at the technology and the existing ethical guidance and law and applied existing guidelines and rules in a thorough and balanced fashion. This article briefly summarizes Opinion 24-1 and highlights some of its important features.

The Opinion

On January 19, 2024, The Florida Bar released Ethics Opinion 24-1 (“Opinion 24-1”) regarding the use of generative artificial intelligence (“AI”) in the practice of law. The Florida Bar and the State Bar of California are leaders in issuing ethical guidance on this issue. Opinion 24-1 draws from a solid background of ethics opinions and guidance in Florida and around the country and provides positive as well as cautionary statements regarding the emerging technologies. Overall, the guidance is well-placed and helpful for lawyers at a time when so many are weighing the use of generative AI technology in their law practices."

Lawyers weigh strength of copyright suit filed against BigLaw firm; Rhode Island Lawyers Weekly, January 29, 2024

Pat Murphy, Rhode Island Lawyers Weekly; Lawyers weigh strength of copyright suit filed against BigLaw firm

"Jerry Cohen, a Boston attorney who teaches IP law at Roger Williams University School of Law, called the suit “not so much a copyright case as it is a matter of professional responsibility and respect.”"

Tuesday, December 26, 2023

Big law firms gobble up intellectual property firms in consolidation push; Reuters, December 12, 2023

Reuters; Big law firms gobble up intellectual property firms in consolidation push

"At least half a dozen firms that specialize in intellectual property law and litigation have been absorbed by larger U.S. law firms this year, as bigger firms seek to expand their IP benches and the niche firms face pressure to match their rivals' resources."

Wednesday, August 19, 2020

A New Copyright Office Warehouse–25 Years in the Making; Library of Congress, August 19, 2020

Library of Congress; A New Copyright Office Warehouse–25 Years in the Making

"The following is a guest post by Paul Capel, Supervisory Records Management Section Head.

The United States Copyright Office holds the most comprehensive collection of copyright records in the world. The Office has over 200,000 boxes of deposit copies spread among three storage facilities in Landover, Maryland; a contracted space in Pennsylvania; and the National Archives and Records Administration (NARA) facility in Massachusetts. Even with these three warehouses, that’s not enough space. Each day, the Office receives new deposits, and despite the increase in electronic deposits, our physical deposits continue to grow year after year.

These deposits are managed by the Deposit Copies Storage Unit, a dedicated team that springs into action to retrieve deposits when requested by examiners or researchers or for litigation cases. In this type of work, speed and efficiency of retrieval are critical. Managing deposits across three storage locations can present a challenge to our ideal retrieval times. When our records are stored in several locations, the potential for miscommunication or misplaced deposits increases.

This October, the Office will be opening a new 40,000 square foot warehouse that has been in discussion for over twenty-five years. We will be moving our deposits out of facilities that are more than forty years old to centrally locate them in a new state-of-the-art facility. This is a huge undertaking, and we are aiming to move 88,000 boxes from Landover in under 45 days. The new space is environmentally controlled and meets preservation requirements for the storage of federal records. Even more importantly, the new facility will allow the Office to maintain control over all our records in a single location, which will improve our retrieval times and will enable us to serve our stakeholders better.

This new facility is a great start, but we have an even bigger vision for our deposits. To truly inventory and track our deposits, the Office is investigating a warehouse management system that will help staff inventory, track, locate, and manage all the items in our warehouse. This type of system will help the Office enhance the availability and accessibility of materials, decreasing manual processing and allowing for real-time tracking of deposits at any given time. It will also let us know who has them and when their period of retention ends.

This system will provide all the notifications expected from any modern delivery service. Copyright Office staff will be able to obtain a copy of their order and tell when it is in transit, know when it has been delivered, and sign for it digitally. This system will also provide transparency to others who might have an interest in requesting the same deposit, to see where it currently is, who has it, and how long they have had it."

Tuesday, April 14, 2020

The Open COVID Pledge: What Is It and Is It Right for You?; The National Law Review, April 14, 2020

Theresa Rakocy, The National Law Review; The Open COVID Pledge: What Is It and Is It Right for You?

"Enter one possible solution: The Open COVID Pledge. A group of scientists, lawyers, and entrepreneurs developed the Open COVID Pledge to encourage businesses and research facilities to make their intellectual property available for use in the fight against COVID-19. The idea behind the Open COVID Pledge is to allow open sharing of intellectual property and technology to end the pandemic without the need for timely and costly licenses or royalty agreements. The initiative comes at a time when researchers and companies alike are surging ahead with ways to combat and end COVID-19. In its Press Release, the individuals behind the Open COVID Pledge explain that the license is needed because “enabling individuals and organizations across the world to work on solutions together, without impediments, is the quickest way to end this pandemic.”...

As COVID-19 continues to spread worldwide, with the number of new cases each day still increasing in most countries, research and the development of new technologies to combat and eradicate COVID-19 has blossomed. As discussed in an earlier post, countries and companies are looking for ways to contribute, with many now making available and expanding access to their intellectual property. The balance between access and protection of intellectual property, however, is delicate."

Wednesday, March 25, 2020

Celebrating the Women Leading the Copyright Office; Library of Congress: Copyright Creativity At Work Blog, March 25, 2020

Celebrating the Women Leading the Copyright Office

"To celebrate Women’s History Month, I wanted to write about the five women who have served (and are serving) as leaders of the U.S. Copyright Office. Women have led this Office consecutively since November 1993, and their accomplishments are nothing short of incredible. These five lawyers (who all attended either Columbia Law School or George Washington Law) have contributed over 100 years of public service to the Copyright Office, counting all their roles. Here is just a snapshot of their accomplishments and contributions to copyright."