Friday, October 4, 2024

Beyond the hype: Key components of an effective AI policy; CIO, October 2, 2024

  Leo Rajapakse, CIO; Beyond the hype: Key components of an effective AI policy

"An AI policy is a living document 

Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow."

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

  Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practice, tracing the history of GenAI applications in law and distinguishing among the various AI tools available today. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Thursday, October 3, 2024

Gilead Agrees to Allow Generic Version of Groundbreaking H.I.V. Shot in Poor Countries; The New York Times, October 2, 2024

 The New York Times; Gilead Agrees to Allow Generic Version of Groundbreaking H.I.V. Shot in Poor Countries

"The drugmaker Gilead Sciences on Wednesday announced a plan to allow six generic pharmaceutical companies in Asia and North Africa to make and sell at a lower price its groundbreaking drug lenacapavir, a twice-yearly injection that provides near-total protection from infection with H.I.V.

Those companies will be permitted to sell the drug in 120 countries, including all the countries with the highest rates of H.I.V., which are in sub-Saharan Africa. Gilead will not charge the generic drugmakers for the licenses.

Gilead says the deal, made just weeks after clinical trial results showed how well the drug works, will provide rapid and broad access to a medication that has the potential to end the decades-long H.I.V. pandemic.

But the deal leaves out most middle- and high-income countries — including Brazil, Colombia, Mexico, China and Russia — that together account for about 20 percent of new H.I.V. infections. Gilead will sell its version of the drug in those countries at higher prices. The omission reflects a widening gulf in health care access that is increasingly isolating the people in the middle."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

 John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."