Wednesday, June 12, 2024

Adobe Responds to AI Fears With Plans for Updated Legal Terms; Bloomberg Law, June 12, 2024

Cassandre Coyer and Aruni Soni, Bloomberg Law; Adobe Responds to AI Fears With Plans for Updated Legal Terms

"“As technology evolves, we have to evolve,” Dana Rao, Adobe’s general counsel, said in an interview with Bloomberg Law. “The legal terms have to evolve, too. And that’s really the lesson that we’re sort of internalizing here.”

Over the weekend, some Adobe customers revolted on social media, crying foul at updated terms of use they claimed allowed Adobe to seize their intellectual property and use their data to feed AI models. 

The Photoshop and Illustrator maker responded with multiple blog posts over several days seeking to reassure users it wasn’t stealing their content, including a pledge to quickly rewrite its user agreement in clearer language. Rao said Tuesday that Adobe will be issuing updated terms of use on June 18 in which it will specifically state the company doesn’t train its Firefly AI models on its cloud content.

The unexpected online storm around the updates is the latest example of how sweeping technological changes—such as the rise of generative AI—have bolstered users’ fears of copyright violations and privacy invasions. That sentiment is part of the landscape the tech industry must navigate to serve a creator community increasingly on edge.

What happened is “more of a lesson in terms of how to present terms of use and roll out updates in a way that can address or alleviate customer concerns, especially in the era of AI and increased concern over privacy,” said Los Angeles-based advertising attorney Robert Freund."

Big Tech Launches Campaign to Defend AI Use; The Hollywood Reporter, June 6, 2024

Winston Cho, The Hollywood Reporter; Big Tech Launches Campaign to Defend AI Use

"Chamber of Progress, a tech industry coalition whose members include Amazon, Apple and Meta, is launching a campaign to defend the legality of using copyrighted works to train artificial intelligence systems.

The group says the campaign, called “Generate and Create” and unveiled on Thursday, will aim to highlight “how artists use generative AI to enhance their creative output” and “showcase how AI lowers barriers for producing art” as part of an initiative to “defend the longstanding legal principle of fair use under copyright law.”"

EFF-Austin: Generative AI & Copyright; June 11, 2024

 EFF-Austin: Generative AI & Copyright

"Generative AI & Copyright

From the Organizers:

One of the key legal issues currently being debated around Generative AI is whether or not tools like ChatGPT and Midjourney are in violation of copyright; namely, whether the act of using existing artistic content as training data to produce new content is a derivative or transformational use of the training data. In this talk, Ed will walk us through both what current copyright law has to say about the legal status of AI art, as well as what he believes the legal framework around AI art should be going forward.

Our speaker this month is Ed Cavazos. Ed Cavazos is a technology attorney and the managing partner of Pillsbury Winthrop Shaw Pittman’s Austin office. His practice focuses on intellectual property and cutting-edge issues in the internet, e-commerce, video game, and software industries. He has been involved in a wide variety of high-profile legal matters, including assisting on the Steve Jackson Games lawsuit against the U.S. Secret Service while still a law student, successfully defending tech companies against patent and copyright trolls, and representing some of the earliest entrepreneurs developing NFT and Metaverse-related business models. His 1994 book, Cyberspace and the Law (MIT Press) was one of the first texts exploring internet legal issues and over the years he has written a number of law review articles exploring similar issues. Ed was one of the founders of EFF-Austin in the early 1990’s and has, since then, served in various officer, board and advisory board roles.

Talk will be livestreamed at https://www.youtube.com/user/austintechlive and will later be archived at https://www.youtube.com/user/effaustin. Questions for the speaker from virtual attendees may be submitted via the Youtube livestream chat, our Twitter account, @EFFaustin, or our Mastodon account, @effaustin."

AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference; The Wrap, June 9, 2024

The Wrap; AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference

"Renard T. Jenkins — a former Warner Bros. Discovery exec who’s now president and CEO of I2A2 Technologies, Labs and Studios — said his company is working to help create an infrastructure to help with authenticating content.

“Back in the old days, you had watermarks,” he said, noting that file-based content can be altered to remove information about the original creator. “What we are attempting to do is create an infrastructure and ecosystem that would allow us to track every single iteration of a piece of content from its origins all the way through the distribution.” 

For that to happen, the PGA and other organizations would have to agree to a new standard. “It’s a very heavy lift,” he said, comparing the necessary level of cooperation to a cross-mafia agreement, describing it as the “five families of Hollywood coming together.”

He also suggested that blockchain technology could be used to “audit and track” every change to a piece of content. It’s the same tech used for Bitcoin and the much-maligned NFT digital assets."

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

  James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including Florida, California, New York, and DC, as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

You Can Create Award-Winning Art With AI. Can You Copyright It?; Bloomberg Law, June 5, 2024

Matthew S. Schwartz, Bloomberg Law; You Can Create Award-Winning Art With AI. Can You Copyright It?

"We delved into the controversy surrounding the use of copyrighted material in training AI systems in our first two episodes of this season. Now we shift our focus to the output. Who owns artwork created using artificial intelligence? Should our legal system redefine what constitutes authorship? Or, as AI promises to redefine how we create, will the government cling to historical notions of authorship?

Guests:

  • Jason M. Allen, founder of Art Incarnate
  • Sy Damle, partner in the copyright litigation group at Latham & Watkins
  • Shira Perlmutter, Register of Copyrights and director of the US Copyright Office"

Friday, June 7, 2024

Angry Instagram posts won’t stop Meta AI from using your content; Popular Science, June 5, 2024

Mack DeGeurin, Popular Science; Angry Instagram posts won’t stop Meta AI from using your content

"Meta, the Mark Zuckerberg-owned tech giant behind Instagram, surprised many of the app’s estimated 1.2 billion global users with a shock revelation last month. Images, including original artwork and other creative assets uploaded to the company’s platforms, are now being used to train the company’s AI image generator. That admission, initially made public by Meta executive Chris Cox during an interview with Bloomberg last month, has elicited a fierce backlash from some creators. As of writing, more than 130,000 Instagram users have reshared a message on Instagram telling the company they do not consent to it using their data to train Meta AI. Those pleas, however, are founded on a fundamental misunderstanding of creators’ relationship with extractive social media platforms. These creators already gave away their work, whether they realize it or not."

‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped; The New York Times, June 6, 2024

Tripp Mickle, The New York Times; ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

"As of early April, Humane had received around 10,000 orders for the Ai Pin, a small fraction of the 100,000 that it hoped to sell this year, two people familiar with its sales said. In recent months, the company has also grappled with employee departures and changed a return policy to address canceled orders. On Wednesday, it asked customers to stop using the Ai Pin charging case because of a fire risk associated with its battery.

Its setbacks are part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated and Samsung has added A.I. features to a smartphone that were called “excellent at times and baffling at others.”"

Thursday, June 6, 2024

Pitt’s national patent ranking improved to No. 14 in 2023; PittWire, June 4, 2024

Mike Yeomans, PittWire; Pitt’s national patent ranking improved to No. 14 in 2023

"Innovators at the University of Pittsburgh received 114 U.S. patents in 2023, up from 105 the year before. Their efforts earned Pitt a No. 14 ranking for utility patents granted among universities nationally, according to a list the National Academy of Inventors (NAI) published in June. 

NAI has published the Top 100 Worldwide Universities list each year since 2013, and last year the organization added the U.S. ranking. Pitt is ranked No. 19 on the global list.

Pitt inventors can find support for bringing their ideas to life at every step of the process in the Office of Innovation and Entrepreneurship (OIE), which tracks patents issued each month and offers commercialization resources, sessions with experts in residence and funding opportunities to the University community...

The Top 100 U.S. Universities list is meant to provide a more focused view of the national innovation landscape and the contributions made by U.S. academic institutions. Pitt ranks just behind the University of Pennsylvania, which earned 119 patents, and ahead of Northwestern University (108 patents), Cornell University (94 patents) and Duke University (92 patents)."

Can you become a lawyer by listening to Taylor Swift?; BBC, June 3, 2024

 Jamie Russell, BBC; Can you become a lawyer by listening to Taylor Swift?

"While Taylor Swift was on her record-breaking Eras Tour, one Swiftie was using the star’s music in a bid to pass her law degree.

Regan Caie, a law student at the University of Glasgow, wrote her fourth-year dissertation about Swift's re-recorded albums and copyright law.

The music star has re-recorded and re-released four of her first six albums over a copyright dispute with producer and artist manager Scooter Braun.

Regan, 21, said her dissertation combined her love of Swift's music with her ambition to eventually specialise in copyright and intellectual property law."

Wednesday, June 5, 2024

Will A.I. Be a Creator or a Destroyer of Worlds?; The New York Times, June 5, 2024

Thomas B. Edsall, The New York Times; Will A.I. Be a Creator or a Destroyer of Worlds?

"The advent of A.I. — artificial intelligence — is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?"

Can You Trademark a Potato? Take Our Food-Branding Quiz.; The New York Times, June 4, 2024

The New York Times; Can You Trademark a Potato? Take Our Food-Branding Quiz.

America is saturated with food trademarks. The Cronut? Trademarked. Pop-Tarts? Trademarked. Even grapes that taste like cotton candy, and the mash-up of gai lan and broccoli called Broccolini are legally protected.

Yet the celebrity chef David Chang was widely criticized this spring for pressuring small manufacturers to stop using the term “chile crunch.” His business holds the trademark for the spicy condiment, but many people wondered: How can a name common to so many cuisines be owned by one company?

The answers to that question and many more lie in the byzantine deliberations of the U.S. Patent and Trademark Office, in Alexandria, Va. It has registered more than 200,000 food and agricultural trademarks, which means that the name or product — the brand, essentially — is unique enough that no one else is allowed to use it for a similar item. (Recipes can’t be trademarked, but some can be ruled trade secrets, like the formula for Dr Pepper or KFC’s 11 herbs and spices.)

The process of deciding what merits a trademark can be downright Talmudic, starting with the hierarchy of trademark types. The easiest to secure and protect are the completely made-up words that the office calls “fanciful,” like Häagen-Dazs. Next are “arbitrary” names — real words that have nothing to do with the products they identify, like Apple for computers. Harder-to-trademark categories include “suggestive” names, which contain a hint of what the product is, like SweeTarts, and plainly “descriptive” ones, like All-Bran.

There are precise legal requirements to meet, but also room for subjective interpretation. Public perception is the barometer. If a name seems confusing, misleading or too common, it won’t get a trademark. “Our job is to figure out what the American consumer is thinking,” said Amy Cotton, the deputy commissioner for trademark examination.

As a consumer, how good are you at gauging what deserves a trademark?"

OpenAI and Google DeepMind workers warn of AI industry risks in open letter; The Guardian, June 4, 2024

The Guardian; OpenAI and Google DeepMind workers warn of AI industry risks in open letter

"A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Mirac Suzgun, Stanford University; Faiz Surani, Stanford University; Christopher D. Manning, Stanford University; Matthew Dahl, Yale University; Daniel E. Ho, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS; Mind Matters, June 3, 2024

Mind Matters; GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS

"The problem, Crawford and Schultz say, is that copyright law, as currently framed, does not really protect individuals under these circumstances. That’s not surprising. Copyright dates back to at least 1710 and the issues were very different then.

For one thing, as Jonathan Bartlett pointed out last December, when the New York Times launched a lawsuit for copyright violation against Microsoft and OpenAI, everyone accepted that big search engines have always violated copyright. But if they brought people to your site, while saving and using your content for themselves, you were getting something out of it at least.

But it’s different with generative AI and the chatbot. They use and replace your content. Users are not going back to you for more. OpenAI freely admits that it violates copyright but relies on loopholes to get around legal responsibility.

As the lawsuits pile up, it’s clear that gen AI and chatbots can’t work without these billions of images and texts. So we either do without them or we find a way to compensate the producers."

Adobe gets called out for violating its own AI ethics; Digital Trends, June 3, 2024

Digital Trends; Adobe gets called out for violating its own AI ethics

"Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-generated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”...

Adobe has since removed the offending images, conceding in the Threads conversation that, “this goes against our Generative AI content policy.”

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”"

AI isn't useless. But is it worth it?; [citation needed], April 17, 2024

 Molly White , [citation needed]; AI isn't useless. But is it worth it?

"When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs."

Google’s A.I. Search Leaves Publishers Scrambling; The New York Times, June 1, 2024

Nico Grant, The New York Times; Google’s A.I. Search Leaves Publishers Scrambling

"In May, Google announced that the A.I.-generated summaries, which compile content from news sites and blogs on the topic being searched, would be made available to everyone in the United States. And that change has Mr. Pine and many other publishing executives worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.

“It potentially chokes off the original creators of the content,” Mr. Pine said. The feature, AI Overviews, felt like another step toward generative A.I. replacing “the publications that they have cannibalized,” he added."

How news coverage, often uncritical, helps build up the AI hype; Reuters Institute, May 20, 2024

Prof. Rasmus Kleis Nielsen, Reuters Institute; How news coverage, often uncritical, helps build up the AI hype

"“I would put media reporting [about AI] at around two out of 10,” David Reid, professor of Artificial Intelligence at Liverpool Hope University, said to the BBC earlier this year. “When the media talks about AI, they think of it as a single entity. It is not. What I would like to see is more nuanced reporting.”

While some individual journalists and outlets are highly respected for their reporting on AI, overall, social science research on news media coverage of artificial intelligence provides some support for Reid’s assessment.

Some working in the technology industry may feel very put upon – a few years ago Zachary Lipton, then an assistant professor at the machine learning department at Carnegie Mellon University, was quoted in the Guardian calling media coverage of artificial intelligence “sensationalised crap” and likening it to an “AI misinformation epidemic”. In private conversations, many computer scientists and technologists working in the private sector echo his complaints, decrying what several describe as relentlessly negative coverage obsessed with “killer robots.”"

Thursday, May 30, 2024

The media bosses fighting back against AI — and the ones cutting deals; The Washington Post, May 27, 2024

The Washington Post; The media bosses fighting back against AI — and the ones cutting deals

"The fact that so many media companies are cutting deals with Open AI could “dilute” the leverage that the companies suing it have, Mateen noted. On the other hand, by paying some publishers so much money, Open AI may be undermining its own defense: If it were truly “fair use,” he said, “they’d be confident enough not to pay anything.”"

Wednesday, May 29, 2024

Will the rise of AI spell the end of intellectual property rights?; The Globe and Mail, May 27, 2024

Sheema Khan, The Globe and Mail; Will the rise of AI spell the end of intellectual property rights?

"AI’s first challenge to IP is in the inputs...

Perhaps the question will become: Will IP be the death of AI?...

The second challenge relates to who owns the AI-generated products...

Yet IP rights are key to innovation, as they provide a limited monopoly to monetize investments in research and development. AI represents an existential threat in this regard.

Clearly, the law has not caught up. But sitting idly by is not an option, as there are too many important policy issues at play."

Tuesday, May 28, 2024

Judges in Gen AI Cases Favor Summary Judgment on Copyright; Bloomberg Law, May 28, 2024

 Golriz Chrostowski, Bloomberg Law; Judges in Gen AI Cases Favor Summary Judgment on Copyright 

"Bloomberg Law dockets research found 19 copyright lawsuits against generative AI defendants that are currently pending in federal district courts. These 19 cases are filed in four jurisdictions: the Northern District of California, the District of Delaware, the Southern District of New York, and the Middle District of Tennessee. The 11 judges overseeing these cases have varying degrees of experience deciding summary judgment motions involving copyright law, according to Bloomberg Law’s Litigation Analytics tool...

Despite their varying amounts of experience, it appears that the majority of the generative AI judges are willing to grant in whole or in part motions for summary judgment involving copyright law. However, only Judges Sidney H. Stein, Jon S. Tigar, Waverly D. Crenshaw, Jr., Rakoff, and McMahon have decided motions for summary judgment involving the affirmative defense of fair use. Only three found in favor of fair use, forecasting an uphill battle for generative AI defendants. 

Armed with this kind of knowledge about a judge’s experience with copyright law, practitioners can determine how much information they may need to provide the judge presiding over their case."

Monday, May 27, 2024

‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI; The Hill, May 23, 2024

Christopher Kenneally, The Hill; ‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI

"Beyond commercially published books, journals, and newspapers, AI databases derive from a vast online trove of publicly available social media and Wikipedia entries, as well as digitized library and museum collections, court proceedings, and government legislation and regulation.

Consumption of public and private individual data on the “open” web marks an important shift in digital evolution. No one is left out. Consequently, we have all become stakeholders.

AI is now forcing us to consider viewing copyright as a public good...

Statutory licensing schemes for copyright-protected works are already applied to cable television systems and music recordings with great success. Fees collected for AI rights-licensing of publicly available works need not be burdensome. The funds can help to underwrite essential public education in digital literacy and civil discourse online.

OpenAI, along with Meta, Apple, Google, Amazon, and others who stand to benefit, must recognize the debt owed to the American people for the data that fuels their AI solutions."

Friday, May 24, 2024

Navigating the Patchwork of AI Laws, Standards, and Guidance; American Bar Association (ABA), May 9, 2024

Emily Maxim Lamm, American Bar Association (ABA); Navigating the Patchwork of AI Laws, Standards, and Guidance

"The opening weeks of 2024 have seen a record number of state legislative proposals seeking to regulate artificial intelligence (AI) across different sectors in the United States...

With this type of rapid-fire start to the 2024 legislative season, the AI legal landscape will likely continue evolving across the board. As a result, organizations today are facing a complex and dizzying web of proposed and existing AI laws, standards, and guidance.

This article aims to provide a cohesive overview of this AI patchwork and to help organizations navigate this increasingly intricate terrain. The focus here will be on the implications of the White House AI Executive Order, existing state and local laws in the United States, the European Union’s AI Act, and, finally, governance standards to help bring these diverse elements together within a framework."

‘The Hunt for Gollum’ Was Just Announced. It Was on YouTube in 2009.; The New York Times, May 22, 2024

Stefano Montali, The New York Times ; ‘The Hunt for Gollum’ Was Just Announced. It Was on YouTube in 2009.

"Early on, he had reached an agreement with the Tolkien estate ensuring that his project would respect its intellectual property, and that it would be freely available and noncommercial. “We were lucky that they looked kindly on it as something that was for the fan community,” Bouchard said.

But YouTube denied the appeal. So, like eagles over Mordor, the Ringers, as the fans are known, swooped in. They wrote articles and posted heated comments on Reddit and other sites, calling the removal “deplorable” and “despicable.” Bouchard noted his disappointment on X.

Bouchard quickly received a follow-up email from YouTube: The movie had been reinstated. In an email, Warner Bros. said it had no official comment. YouTube did not reply to requests for comment."

Copyright issues at OpenAI and other AI startups are ramping up; Fortune, May 23, 2024

Jessica Mathews, Fortune; Copyright issues at OpenAI and other AI startups are ramping up

"Last week, Sony Music Group sent out a letter to more than 700 companies building out or using generative AI models with a clear warning: If you’re going to use Sony’s content, you better have explicit permission."