Friday, June 14, 2024

Pope Francis is first pontiff to address G7 leaders with AI speech; Axios, June 14, 2024

"Pope Francis made history Friday as the first pontiff to speak at the Group of Seven meeting in Fasano, Italy, where he discussed his concerns with artificial intelligence.

Why it matters: The pope has long urged caution around AI, calling it "a fascinating tool and also a terrifying one" during his remarks Friday, even as he acknowledged its potential applications in medicine, labor, culture, communications, education and politics.

  • "The holy scriptures say that God gave to human beings his spirit in order for them to have wisdom, intelligence and knowledge in all kinds of tasks," he said. "Science and technology are therefore extraordinary products of the potential which is active in us human beings.""

OpenAI Asks to Merge New York Times, Daily News Copyright Suits; Bloomberg Law, June 14, 2024

Aruni Soni, Bloomberg Law; OpenAI Asks to Merge New York Times, Daily News Copyright Suits

"OpenAI Inc. asked a federal court to combine the “functionally identical” copyright lawsuits filed against it by The New York Times Co. and The Daily News LP alleging it used their content to create artificial intelligence services."

How Pope Francis became the AI ethicist for tech titans and world leaders; The Washington Post, June 14, 2024

The Washington Post; How Pope Francis became the AI ethicist for tech titans and world leaders

"In the same way the pope sought to elevate the urgency of climate change, Francis now is zooming in on AI — a technology he has said poses “a risk to our survival” and could “endanger our common home.”

His concerns center less on sci-fi movie predictions of malevolent machines, or how the possibility of sentient AI might challenge the fundamental definition of life. Rather, he has focused on the apocalyptic potential of weaponized AI. He fears the elimination of human empathy as algorithms begin to decide the outcome of everything from asylum applications to bank loans. He has heralded “the exciting opportunities” of the technology, but frets over its power to deepen inequality within and among nations. Those risks, he insists, must be managed through a global treaty to regulate AI."

Thursday, June 13, 2024

AI copyright fight turns to disclosing original content; Voice of America (VoA), June 13, 2024

 Matt Dibble, Voice of America (VoA); AI copyright fight turns to disclosing original content

"Artists and other creators say their works have been used to build the multibillion-dollar generative AI industry without any compensation for them. Matt Dibble reports on a proposed U.S. law that would force AI companies to reveal their sources."

Wednesday, June 12, 2024

Patently insufficient: a new intellectual property treaty does little to protect Māori traditional knowledge; The Conversation, June 9, 2024

Senior Lecturer Above the Bar, University of Canterbury; Senior Lecturer in Management, Te Herenga Waka — Victoria University of Wellington; and Professor of Commercial Law, Te Herenga Waka — Victoria University of Wellington, The Conversation; Patently insufficient: a new intellectual property treaty does little to protect Māori traditional knowledge

"The problem of “biopiracy” – the misappropriation and patenting for profit of Indigenous knowledge – has been on the rise for some time. So a global treaty aimed at protecting traditional knowledge and genetic resources should be a welcome development.

In late May, the World Intellectual Property Organization (WIPO) adopted the Treaty on Intellectual Property, Genetic Resources and Traditional Knowledge. It is the first international agreement on intellectual property that includes provisions on Indigenous peoples’ knowledge. 

More than 20 years in the making, it represents the culmination of negotiations between the 193 WIPO member states since 2000. And on the face of it, the treaty appears to be an important intervention to prevent biopiracy. 

However, the new agreement is unlikely to lead to major changes to New Zealand law, or improve the rights of Māori to own or control their intellectual property and taonga (treasured possessions). Given the well-documented misappropriation of Māori knowledge and taonga, more substantive protections are still needed." 

Adobe Responds to AI Fears With Plans for Updated Legal Terms; Bloomberg Law, June 12, 2024

Cassandre Coyer and Aruni Soni, Bloomberg Law; Adobe Responds to AI Fears With Plans for Updated Legal Terms

"“As technology evolves, we have to evolve,” Dana Rao, Adobe’s general counsel, said in an interview with Bloomberg Law. “The legal terms have to evolve, too. And that’s really the lesson that we’re sort of internalizing here.”

Over the weekend, some Adobe customers revolted on social media, crying foul at updated terms of use they claimed allowed Adobe to seize their intellectual property and use their data to feed AI models. 

The Photoshop and Illustrator maker responded with multiple blog posts over several days seeking to reassure users it wasn’t stealing their content, including a pledge to quickly rewrite its user agreement in clearer language. Rao said Tuesday that Adobe will be issuing updated terms of use on June 18 in which it will specifically state the company doesn’t train its Firefly AI models on its cloud content.

The unexpected online storm around the updates is the latest example of how sweeping technological changes—such as the rise of generative AI—have bolstered users’ fears of copyright violations and privacy invasions. That sentiment is part of the landscape the tech industry must navigate to serve a creator community increasingly on edge.

What happened is “more of a lesson in terms of how to present terms of use and roll out updates in a way that can address or alleviate customer concerns, especially in the era of AI and increased concern over privacy,” said Los Angeles-based advertising attorney Robert Freund."

Big Tech Launches Campaign to Defend AI Use; The Hollywood Reporter, June 6, 2024

Winston Cho, The Hollywood Reporter; Big Tech Launches Campaign to Defend AI Use

"Chamber of Progress, a tech industry coalition whose members include Amazon, Apple and Meta, is launching a campaign to defend the legality of using copyrighted works to train artificial intelligence systems.

The group says the campaign, called “Generate and Create” and unveiled on Thursday, will aim to highlight “how artists use generative AI to enhance their creative output” and “showcase how AI lowers barriers for producing art” as part of an initiative to “defend the longstanding legal principle of fair use under copyright law.”"

EFF-Austin: Generative AI & Copyright; June 11, 2024

 EFF-Austin: Generative AI & Copyright

"Generative AI & Copyright

From the Organizers:

One of the key legal issues currently being debated around Generative AI is whether or not tools like ChatGPT and Midjourney are in violation of copyright; namely, whether the act of using existing artistic content as training data to produce new content is a derivative or transformational use of the training data. In this talk, Ed will walk us through both what current copyright law has to say about the legal status of AI art, as well as what he believes the legal framework around AI art should be going forward.

Our speaker this month is Ed Cavazos. Ed Cavazos is a technology attorney and the managing partner of Pillsbury Winthrop Shaw Pittman’s Austin office. His practice focuses on intellectual property and cutting-edge issues in the internet, e-commerce, video game, and software industries. He has been involved in a wide variety of high-profile legal matters, including assisting on the Steve Jackson Games lawsuit against the U.S. Secret Service while still a law student, successfully defending tech companies against patent and copyright trolls, and representing some of the earliest entrepreneurs developing NFT and Metaverse-related business models. His 1994 book, Cyberspace and the Law (MIT Press) was one of the first texts exploring internet legal issues and over the years he has written a number of law review articles exploring similar issues. Ed was one of the founders of EFF-Austin in the early 1990’s and has, since then, served in various officer, board and advisory board roles.

Talk will be livestreamed at https://www.youtube.com/user/austintechlive and will later be archived at https://www.youtube.com/user/effaustin. Questions for the speaker from virtual attendees may be submitted via the Youtube livestream chat, our Twitter account, @EFFaustin, or our Mastodon account, @effaustin."

AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference; The Wrap, June 9, 2024

The Wrap; AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference

"Renard T. Jenkins — a former Warner Bros. Discovery exec who’s now president and CEO of I2A2 Technologies, Labs and Studios — said his company is working to help create an infrastructure to help with authenticating content.

“Back in the old days, you had watermarks,” he said, noting that file-based content can be altered to remove information about the original creator. “What we are attempting to do is create an infrastructure and ecosystem that would allow us to track every single iteration of a piece of content from its origins all the way through the distribution.” 

For that to happen, the PGA and other organizations would have to agree to a new standard. “It’s a very heavy lift,” he said, comparing the necessary level of cooperation to a cross-mafia agreement, describing it as the “five families of Hollywood coming together.”

He also suggested that blockchain technology could be used to “audit and track” every change to a piece of content. It’s the same tech used for Bitcoin and the much-maligned NFT digital assets."
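The "audit and track" idea Jenkins describes is, at bottom, a hash chain: each revision of a piece of content records a cryptographic link to the revision before it, so any later tampering with the history is detectable. A minimal sketch of that idea (names and structure here are illustrative, not any real provenance standard such as the one Jenkins' company is building):

```python
import hashlib
import json

def add_version(chain: list, content: bytes, note: str) -> dict:
    """Append a content revision, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "note": note,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev": prev_hash,
    }
    # The entry's own hash covers its fields, including the back-link.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every link; editing any earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
add_version(chain, b"original photo bytes", "origin")
add_version(chain, b"cropped photo bytes", "crop for web")
assert verify(chain)

chain[0]["note"] = "tampered"   # altering history invalidates the chain
assert not verify(chain)
```

A public blockchain adds distributed consensus on top of this structure, which is what would let Jenkins' "five families" trust a shared record none of them controls alone.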

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

  James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including FloridaCaliforniaNew YorkDC as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

You Can Create Award-Winning Art With AI. Can You Copyright It?; Bloomberg Law, June 5, 2024

Matthew S. Schwartz, Bloomberg Law; You Can Create Award-Winning Art With AI. Can You Copyright It?

"We delved into the controversy surrounding the use of copyrighted material in training AI systems in our first two episodes of this season. Now we shift our focus to the output. Who owns artwork created using artificial intelligence? Should our legal system redefine what constitutes authorship? Or, as AI promises to redefine how we create, will the government cling to historical notions of authorship?

Guests:

  • Jason M. Allen, founder of Art Incarnate
  • Sy Damle, partner in the copyright litigation group at Latham & Watkins
  • Shira Perlmutter, Register of Copyrights and director of the US Copyright Office"

Friday, June 7, 2024

Angry Instagram posts won’t stop Meta AI from using your content; Popular Science, June 5, 2024

Mack DeGeurin, Popular Science; Angry Instagram posts won’t stop Meta AI from using your content

"Meta, the Mark Zuckerberg-owned tech giant behind Instagram, surprised many of the app’s estimated 1.2 billion global users with a shock revelation last month. Images, including original artwork and other creative assets uploaded to the company’s platforms, are now being used to train the company’s AI image generator. That admission, initially made public by Meta executive Chris Cox during an interview with Bloomberg last month, has elicited a fierce backlash from some creators. As of writing, more than 130,000 Instagram users have reshared a message on Instagram telling the company they do not consent to it using their data to train Meta AI. Those pleas, however, are founded on a fundamental misunderstanding of creators’ relationship with extractive social media platforms. These creators already gave away their work, whether they realize it or not."

‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped; The New York Times, June 6, 2024

Tripp Mickle, The New York Times; ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

"As of early April, Humane had received around 10,000 orders for the Ai Pin, a small fraction of the 100,000 that it hoped to sell this year, two people familiar with its sales said. In recent months, the company has also grappled with employee departures and changed a return policy to address canceled orders. On Wednesday, it asked customers to stop using the Ai Pin charging case because of a fire risk associated with its battery.

Its setbacks are part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated and Samsung has added A.I. features to a smartphone that were called “excellent at times and baffling at others.”"

Thursday, June 6, 2024

Pitt’s national patent ranking improved to No. 14 in 2023; PittWire, June 4, 2024

Mike Yeomans , PittWire; Pitt’s national patent ranking improved to No. 14 in 2023

"Innovators at the University of Pittsburgh received 114 U.S. patents in 2023, up from 105 the year before. Their efforts earned Pitt a No. 14 ranking for utility patents granted among universities nationally, according to a list the National Academy of Inventors (NAI) published in June. 

NAI has published the Top 100 Worldwide Universities list each year since 2013, and last year the organization added the U.S. ranking. Pitt is ranked No. 19 on the global list.

Pitt inventors can find support for bringing their ideas to life at every step of the process in the Office of Innovation and Entrepreneurship (OIE), which tracks patents issued each month and offers commercialization resources, sessions with experts in residence and funding opportunities to the University community...

The Top 100 U.S. Universities list is meant to provide a more focused view of the national innovation landscape and the contributions made by U.S. academic institutions. Pitt ranks just behind the University of Pennsylvania, which earned 119 patents, and ahead of Northwestern University (108 patents), Cornell University (94 patents) and Duke University (92 patents)."

Can you become a lawyer by listening to Taylor Swift?; BBC, June 3, 2024

 Jamie Russell, BBC; Can you become a lawyer by listening to Taylor Swift?

"While Taylor Swift was on her record-breaking Eras Tour, one Swiftie was using the star’s music in a bid to pass her law degree.

Regan Caie, a law student at the University of Glasgow, wrote her fourth-year dissertation about Swift's re-recorded albums and copyright law.

The music star has re-recorded and re-released four of her first six albums over a copyright dispute with producer and artist manager Scooter Braun.

Regan, 21, said her dissertation combined her love of Swift's music with her ambition to eventually specialise in copyright and intellectual property law."

Wednesday, June 5, 2024

Will A.I. Be a Creator or a Destroyer of Worlds?; The New York Times, June 5, 2024

Thomas B. Edsall, The New York Times; Will A.I. Be a Creator or a Destroyer of Worlds?

"The advent of A.I. — artificial intelligence — is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?"

Can You Trademark a Potato? Take Our Food-Branding Quiz.; The New York Times, June 4, 2024

The New York Times; Can You Trademark a Potato? Take Our Food-Branding Quiz.

"America is saturated with food trademarks. The Cronut? Trademarked. Pop-Tarts? Trademarked. Even grapes that taste like cotton candy, and the mash-up of gai lan and broccoli called Broccolini are legally protected.

Yet the celebrity chef David Chang was widely criticized this spring for pressuring small manufacturers to stop using the term “chile crunch.” His business holds the trademark for the spicy condiment, but many people wondered: How can a name common to so many cuisines be owned by one company?

The answers to that question and many more lie in the byzantine deliberations of the U.S. Patent and Trademark Office, in Alexandria, Va. It has registered more than 200,000 food and agricultural trademarks, which means that the name or product — the brand, essentially — is unique enough that no one else is allowed to use it for a similar item. (Recipes can’t be trademarked, but some can be ruled trade secrets, like the formula for Dr Pepper or KFC’s 11 herbs and spices.)

The process of deciding what merits a trademark can be downright Talmudic, starting with the hierarchy of trademark types. The easiest to secure and protect are the completely made-up words that the office calls “fanciful,” like Häagen-Dazs. Next are “arbitrary” names — real words that have nothing to do with the products they identify, like Apple for computers. Harder-to-trademark categories include “suggestive” names, which contain a hint of what the product is, like SweeTarts, and plainly “descriptive” ones, like All-Bran.

There are precise legal requirements to meet, but also room for subjective interpretation. Public perception is the barometer. If a name seems confusing, misleading or too common, it won’t get a trademark. “Our job is to figure out what the American consumer is thinking,” said Amy Cotton, the deputy commissioner for trademark examination.

As a consumer, how good are you at gauging what deserves a trademark?"

OpenAI and Google DeepMind workers warn of AI industry risks in open letter; The Guardian, June 4, 2024

The Guardian; OpenAI and Google DeepMind workers warn of AI industry risks in open letter

"A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Mirac Suzgun, Stanford University; Faiz Surani, Stanford University; Christopher D. Manning, Stanford University; Matthew Dahl, Yale University; Daniel E. Ho, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

Generative AI Is Creating a Copyright Crisis for Artists; Mind Matters, June 3, 2024

Mind Matters; Generative AI Is Creating a Copyright Crisis for Artists

"The problem, Crawford and Schultz say, is that copyright law, as currently framed, does not really protect individuals under these circumstances. That’s not surprising. Copyright dates back to at least 1710 and the issues were very different then.

For one thing, as Jonathan Bartlett pointed out last December, when the New York Times launched a lawsuit for copyright violation against Microsoft and OpenAI, everyone accepted that big search engines have always violated copyright. But if they brought people to your site, while saving and using your content for themselves, you were getting something out of it at least.

But it’s different with generative AI and the chatbot. They use and replace your content. Users are not going back to you for more. OpenAI freely admits that it violates copyright but relies on loopholes to get around legal responsibility.

As the lawsuits pile up, it’s clear that gen AI and chatbots can’t work without these billions of images and texts. So we either do without them or we find a way to compensate the producers."

Adobe gets called out for violating its own AI ethics; Digital Trends, June 3, 2024

Digital Trends; Adobe gets called out for violating its own AI ethics

"Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-genearated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”...

Adobe has since removed the offending images, conceding in the Threads conversation that, “this goes against our Generative AI content policy.”

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”"

AI isn't useless. But is it worth it?; [citation needed], April 17, 2024

Molly White, [citation needed]; AI isn't useless. But is it worth it?

"When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs."

Google’s A.I. Search Leaves Publishers Scrambling; The New York Times, June 1, 2024

Nico Grant, The New York Times; Google’s A.I. Search Leaves Publishers Scrambling

"In May, Google announced that the A.I.-generated summaries, which compile content from news sites and blogs on the topic being searched, would be made available to everyone in the United States. And that change has Mr. Pine and many other publishing executives worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.

“It potentially chokes off the original creators of the content,” Mr. Pine said. The feature, AI Overviews, felt like another step toward generative A.I. replacing “the publications that they have cannibalized,” he added."

How news coverage, often uncritical, helps build up the AI hype; Reuters Institute, May 20, 2024

Prof. Rasmus Kleis Nielsen, Reuters Institute; How news coverage, often uncritical, helps build up the AI hype

"“I would put media reporting [about AI] at around two out of 10,” David Reid, professor of Artificial Intelligence at Liverpool Hope University, said to the BBC earlier this year. “When the media talks about AI, they think of it as a single entity. It is not. What I would like to see is more nuanced reporting.”

While some individual journalists and outlets are highly respected for their reporting on AI, overall, social science research on news media coverage of artificial intelligence provides some support for Reid’s assessment.

Some working in the technology industry may feel very put upon – a few years ago Zachary Lipton, then an assistant professor at the machine learning department at Carnegie Mellon University, was quoted in the Guardian calling media coverage of artificial intelligence “sensationalised crap” and likening it to an “AI misinformation epidemic”. In private conversations, many computer scientists and technologists working in the private sector echo his complaints, decrying what several describe as relentlessly negative coverage obsessed with “killer robots.”"