Wednesday, June 12, 2024

AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference; The Wrap, June 9, 2024

The Wrap; AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference

"Renard T. Jenkins — a former Warner Bros. Discovery exec who’s now president and CEO of I2A2 Technologies, Labs and Studios — said his company is working to help create an infrastructure to help with authenticating content.

“Back in the old days, you had watermarks,” he said, noting that file-based content can be altered to remove information about the original creator. “What we are attempting to do is create an infrastructure and ecosystem that would allow us to track every single iteration of a piece of content from its origins all the way through the distribution.” 

For that to happen, the PGA and other organizations would have to agree to a new standard. “It’s a very heavy lift,” he said, comparing the necessary level of cooperation to a cross-mafia agreement, describing it as the “five families of Hollywood coming together.”

He also suggested that blockchain technology could be used to “audit and track” every change to a piece of content. It’s the same tech used for Bitcoin and the much-maligned NFT digital assets."
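The "audit and track every change" idea Jenkins describes can be sketched as a minimal hash chain, the core data structure behind blockchain-style provenance. This is an illustrative toy, not I2A2's actual system; all function and field names here are invented for the example.

```python
import hashlib
import json

def record_iteration(chain: list, content: bytes, note: str) -> list:
    """Append a provenance record that links to the previous record by hash."""
    prev_hash = chain[-1]["record_hash"] if chain else "genesis"
    record = {
        "note": note,                                        # e.g. "color grade"
        "content_hash": hashlib.sha256(content).hexdigest(), # fingerprint of this version
        "prev_hash": prev_hash,                              # link to prior iteration
    }
    # Hash the record itself so later tampering with any field is detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain: list) -> bool:
    """Check that every record is intact and still points at its predecessor."""
    prev = "genesis"
    for rec in chain:
        if rec["prev_hash"] != prev:
            return False
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True
```

Because each record's hash covers the previous record's hash, altering any iteration (or its description) breaks verification for the rest of the chain, which is what makes this structure suitable for tracking a piece of content "from its origins all the way through the distribution."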

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

  James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including FloridaCaliforniaNew YorkDC as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools, including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy, and Questions for Vendors When Selecting AI Products and Services; links are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

You Can Create Award-Winning Art With AI. Can You Copyright It?; Bloomberg Law, June 5, 2024

Matthew S. Schwartz, Bloomberg Law; You Can Create Award-Winning Art With AI. Can You Copyright It?

"We delved into the controversy surrounding the use of copyrighted material in training AI systems in our first two episodes of this season. Now we shift our focus to the output. Who owns artwork created using artificial intelligence? Should our legal system redefine what constitutes authorship? Or, as AI promises to redefine how we create, will the government cling to historical notions of authorship?

Guests:

  • Jason M. Allen, founder of Art Incarnate
  • Sy Damle, partner in the copyright litigation group at Latham & Watkins
  • Shira Perlmutter, Register of Copyrights and director of the US Copyright Office"

Friday, June 7, 2024

Angry Instagram posts won’t stop Meta AI from using your content; Popular Science, June 5, 2024

Mack DeGeurin, Popular Science; Angry Instagram posts won’t stop Meta AI from using your content

"Meta, the Mark Zuckerberg-owned tech giant behind Instagram, surprised many of the app’s estimated 1.2 billion global users with a shock revelation last month. Images, including original artwork and other creative assets uploaded to the company’s platforms, are now being used to train the company’s AI image generator. That admission, initially made public by Meta executive Chris Cox during an interview with Bloomberg last month, has elicited a fierce backlash from some creators. As of writing, more than 130,000 Instagram users have reshared a message on Instagram telling the company they do not consent to it using their data to train Meta AI. Those pleas, however, are founded on a fundamental misunderstanding of creators’ relationship with extractive social media platforms. These creators already gave away their work, whether they realize it or not."

‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped; The New York Times, June 6, 2024

Tripp Mickle, The New York Times; ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

"As of early April, Humane had received around 10,000 orders for the Ai Pin, a small fraction of the 100,000 that it hoped to sell this year, two people familiar with its sales said. In recent months, the company has also grappled with employee departures and changed a return policy to address canceled orders. On Wednesday, it asked customers to stop using the Ai Pin charging case because of a fire risk associated with its battery.

Its setbacks are part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated and Samsung has added A.I. features to a smartphone that were called “excellent at times and baffling at others.”"

Thursday, June 6, 2024

Pitt’s national patent ranking improved to No. 14 in 2023; PittWire, June 4, 2024

Mike Yeomans, PittWire; Pitt’s national patent ranking improved to No. 14 in 2023

"Innovators at the University of Pittsburgh received 114 U.S. patents in 2023, up from 105 the year before. Their efforts earned Pitt a No. 14 ranking for utility patents granted among universities nationally, according to a list the National Academy of Inventors (NAI) published in June. 

NAI has published the Top 100 Worldwide Universities list each year since 2013, and last year the organization added the U.S. ranking. Pitt is ranked No. 19 on the global list.

Pitt inventors can find support for bringing their ideas to life at every step of the process in the Office of Innovation and Entrepreneurship (OIE), which tracks patents issued each month and offers commercialization resources, sessions with experts in residence and funding opportunities to the University community...

The Top 100 U.S. Universities list is meant to provide a more focused view of the national innovation landscape and the contributions made by U.S. academic institutions. Pitt ranks just behind the University of Pennsylvania, which earned 119 patents, and ahead of Northwestern University (108 patents), Cornell University (94 patents) and Duke University (92 patents)."

Can you become a lawyer by listening to Taylor Swift?; BBC, June 3, 2024

 Jamie Russell, BBC; Can you become a lawyer by listening to Taylor Swift?

"While Taylor Swift was on her record-breaking Eras Tour, one Swiftie was using the star’s music in a bid to pass her law degree.

Regan Caie, a law student at the University of Glasgow, wrote her fourth-year dissertation about Swift's re-recorded albums and copyright law.

The music star has re-recorded and re-released four of her first six albums over a copyright dispute with producer and artist manager Scooter Braun.

Regan, 21, said her dissertation combined her love of Swift's music with her ambition to eventually specialise in copyright and intellectual property law."

Wednesday, June 5, 2024

Will A.I. Be a Creator or a Destroyer of Worlds?; The New York Times, June 5, 2024

Thomas B. Edsall, The New York Times; Will A.I. Be a Creator or a Destroyer of Worlds?

"The advent of A.I. — artificial intelligence — is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?"

Can You Trademark a Potato? Take Our Food-Branding Quiz.; The New York Times, June 4, 2024

The New York Times; Can You Trademark a Potato? Take Our Food-Branding Quiz.

America is saturated with food trademarks. The Cronut? Trademarked. Pop-Tarts? Trademarked. Even grapes that taste like cotton candy and the mash-up of gai lan and broccoli called Broccolini are legally protected.

Yet the celebrity chef David Chang was widely criticized this spring for pressuring small manufacturers to stop using the term “chile crunch.” His business holds the trademark for the spicy condiment, but many people wondered: How can a name common to so many cuisines be owned by one company?

The answers to that question and many more lie in the byzantine deliberations of the U.S. Patent and Trademark Office, in Alexandria, Va. It has registered more than 200,000 food and agricultural trademarks, which means that the name or product — the brand, essentially — is unique enough that no one else is allowed to use it for a similar item. (Recipes can’t be trademarked, but some can be ruled trade secrets, like the formula for Dr Pepper or KFC’s 11 herbs and spices.)

The process of deciding what merits a trademark can be downright Talmudic, starting with the hierarchy of trademark types. The easiest to secure and protect are the completely made-up words that the office calls “fanciful,” like Häagen-Dazs. Next are “arbitrary” names — real words that have nothing to do with the products they identify, like Apple for computers. Harder-to-trademark categories include “suggestive” names, which contain a hint of what the product is, like SweeTarts, and plainly “descriptive” ones, like All-Bran.

There are precise legal requirements to meet, but also room for subjective interpretation. Public perception is the barometer. If a name seems confusing, misleading or too common, it won’t get a trademark. “Our job is to figure out what the American consumer is thinking,” said Amy Cotton, the deputy commissioner for trademark examination.

As a consumer, how good are you at gauging what deserves a trademark?"

OpenAI and Google DeepMind workers warn of AI industry risks in open letter; The Guardian, June 4, 2024

The Guardian; OpenAI and Google DeepMind workers warn of AI industry risks in open letter

"A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Mirac Suzgun, Stanford University; Faiz Surani, Stanford University; Christopher D. Manning, Stanford University; Matthew Dahl, Yale University; Daniel E. Ho, Stanford University; Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. First, it is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS; Mind Matters, June 3, 2024

Mind Matters; GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS

"The problem, Crawford and Schultz say, is that copyright law, as currently framed, does not really protect individuals under these circumstances. That’s not surprising. Copyright dates back to at least 1710 and the issues were very different then.

For one thing, as Jonathan Bartlett pointed out last December, when the New York Times launched a lawsuit for copyright violation against Microsoft and OpenAI, everyone accepted that big search engines have always violated copyright. But if they brought people to your site, while saving and using your content for themselves, you were getting something out of it at least.

But it’s different with generative AI and the chatbot. They use and replace your content. Users are not going back to you for more. OpenAI freely admits that it violates copyright but relies on loopholes to get around legal responsibility.

As the lawsuits pile up, it’s clear that gen AI and chatbots can’t work without these billions of images and texts. So we either do without them or we find a way to compensate the producers."

Adobe gets called out for violating its own AI ethics; Digital Trends, June 3, 2024

Digital Trends; Adobe gets called out for violating its own AI ethics

"Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-genearated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”...

Adobe has since removed the offending images, conceding in the Threads conversation that, “this goes against our Generative AI content policy.”

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”"

AI isn't useless. But is it worth it?; [citation needed], April 17, 2024

 Molly White , [citation needed]; AI isn't useless. But is it worth it?

"When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs."

Google’s A.I. Search Leaves Publishers Scrambling; The New York Times, June 1, 2024

Nico Grant, The New York Times; Google’s A.I. Search Leaves Publishers Scrambling

"In May, Google announced that the A.I.-generated summaries, which compile content from news sites and blogs on the topic being searched, would be made available to everyone in the United States. And that change has Mr. Pine and many other publishing executives worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.

“It potentially chokes off the original creators of the content,” Mr. Pine said. The feature, AI Overviews, felt like another step toward generative A.I. replacing “the publications that they have cannibalized,” he added."

How news coverage, often uncritical, helps build up the AI hype; Reuters Institute, May 20, 2024

 Prof. Rasmus Kleis Nielsen , Reuters Institute; How news coverage, often uncritical, helps build up the AI hype

"“I would put media reporting [about AI] at around two out of 10,” David Reid, professor of Artificial Intelligence at Liverpool Hope University, said to the BBC earlier this year. “When the media talks about AI, they think of it as a single entity. It is not. What I would like to see is more nuanced reporting.”

While some individual journalists and outlets are highly respected for their reporting on AI, overall, social science research on news media coverage of artificial intelligence provides some support for Reid’s assessment.

Some working in the technology industry may feel very put upon – a few years ago Zachary Lipton, then an assistant professor at the machine learning department at Carnegie Mellon University, was quoted in the Guardian calling media coverage of artificial intelligence “sensationalised crap” and likening it to an “AI misinformation epidemic”. In private conversations, many computer scientists and technologists working in the private sector echo his complaints, decrying what several describe as relentlessly negative coverage obsessed with “killer robots.”"

Thursday, May 30, 2024

The media bosses fighting back against AI — and the ones cutting deals; The Washington Post, May 27, 2024

The Washington Post; The media bosses fighting back against AI — and the ones cutting deals

"The fact that so many media companies are cutting deals with Open AI could “dilute” the leverage that the companies suing it have, Mateen noted. On the other hand, by paying some publishers so much money, Open AI may be undermining its own defense: If it were truly “fair use,” he said, “they’d be confident enough not to pay anything.”"

Wednesday, May 29, 2024

Will the rise of AI spell the end of intellectual property rights?; The Globe and Mail, May 27, 2024

Sheema Khan, The Globe and Mail; Will the rise of AI spell the end of intellectual property rights?

"AI’s first challenge to IP is in the inputs...

Perhaps the question will become: Will IP be the death of AI?...

The second challenge relates to who owns the AI-generated products...

Yet IP rights are key to innovation, as they provide a limited monopoly to monetize investments in research and development. AI represents an existential threat in this regard.

Clearly, the law has not caught up. But sitting idly by is not an option, as there are too many important policy issues at play."

Tuesday, May 28, 2024

Judges in Gen AI Cases Favor Summary Judgment on Copyright; Bloomberg Law, May 28, 2024

 Golriz Chrostowski, Bloomberg Law; Judges in Gen AI Cases Favor Summary Judgment on Copyright 

"Bloomberg Law dockets research found 19 copyright lawsuits against generative AI defendants that are currently pending in federal district courts. These 19 cases are filed in four jurisdictions: the Northern District of California, the District of Delaware, the Southern District of New York, and the Middle District of Tennessee. The 11 judges overseeing these cases have varying degrees of experience deciding summary judgment motions involving copyright law, according to Bloomberg Law’s Litigation Analytics tool...

Despite their varying amounts of experience, it appears that the majority of the generative AI judges are willing to grant in whole or in part motions for summary judgment involving copyright law. However, only Judges Sidney H. Stein, Jon S. Tigar, Waverly D. Crenshaw, Jr., Rakoff, and McMahon have decided motions for summary judgment involving the affirmative defense of fair use. Only three found in favor of fair use, forecasting an uphill battle for generative AI defendants. 

Armed with this kind of knowledge about a judge’s experience with copyright law, practitioners can determine how much information they may need to provide the judge presiding over their case."

Monday, May 27, 2024

‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI; The Hill, May 23, 2024

Christopher Kenneally, The Hill; ‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI

"Beyond commercially published books, journals, and newspapers, AI databases derive from a vast online trove of publicly available social media and Wikipedia entries, as well as digitized library and museum collections, court proceedings, and government legislation and regulation.

Consumption of public and private individual data on the “open” web marks an important shift in digital evolution. No one is left out. Consequently, we have all become stakeholders.

AI is now forcing us to consider viewing copyright as a public good...

Statutory licensing schemes for copyright-protected works are already applied to cable television systems and music recordings with great success. Fees collected for AI rights-licensing of publicly available works need not be burdensome. The funds can help to underwrite essential public education in digital literacy and civil discourse online.

OpenAI, along with Meta, Apple, Google, Amazon, and others who stand to benefit, must recognize the debt owed to the American people for the data that fuels their AI solutions."

Friday, May 24, 2024

Navigating the Patchwork of AI Laws, Standards, and Guidance; American Bar Association (ABA), May 9, 2024

 Emily Maxim Lamm , American Bar Association (ABA); Navigating the Patchwork of AI Laws, Standards, and Guidance

"The opening weeks of 2024 have seen a record number of state legislative proposals seeking to regulate artificial intelligence (AI) across different sectors in the United States...

With this type of rapid-fire start to the 2024 legislative season, the AI legal landscape will likely continue evolving across the board. As a result, organizations today are facing a complex and dizzying web of proposed and existing AI laws, standards, and guidance.

This article aims to provide a cohesive overview of this AI patchwork and to help organizations navigate this increasingly intricate terrain. The focus here will be on the implications of the White House AI Executive Order, existing state and local laws in the United States, the European Union’s AI Act, and, finally, governance standards to help bring these diverse elements together within a framework."

‘The Hunt for Gollum’ Was Just Announced. It Was on YouTube in 2009.; The New York Times, May 22, 2024

Stefano Montali, The New York Times; ‘The Hunt for Gollum’ Was Just Announced. It Was on YouTube in 2009.

"Early on, he had reached an agreement with the Tolkien estate ensuring that his project would respect its intellectual property, and that it would be freely available and noncommercial. “We were lucky that they looked kindly on it as something that was for the fan community,” Bouchard said.

But YouTube denied the appeal. So, like eagles over Mordor, the Ringers, as the fans are known, swooped in. They wrote articles and posted heated comments on Reddit and other sites, calling the removal “deplorable” and “despicable.” Bouchard noted his disappointment on X.

Bouchard quickly received a follow-up email from YouTube: The movie had been reinstated. In an email, Warner Bros. said it had no official comment. YouTube did not reply to requests for comment."

Copyright issues at OpenAI and other AI startups are ramping up; Fortune, May 23, 2024

Jessica Mathews, Fortune; Copyright issues at OpenAI and other AI startups are ramping up

"Last week, Sony Music Group sent out a letter to more than 700 companies building out or using generative AI models with a clear warning: If you’re going to use Sony’s content, you better have explicit permission."

Thursday, May 23, 2024

TIME FOR A REFRESH; Creative Commons (CC), May 17, 2024

Anna Tumadóttir, Creative Commons (CC); TIME FOR A REFRESH

"The CC team has been evaluating our progress toward our 2021-2025 strategy. Through that process, we have noticed the ways we have been organically adjusting to the social and technical shifts around us, as well as the ebbs and flows of funding availability. It would be an understatement to say that much has changed since we developed the strategy in 2020 and launched it in 2021. Turns out that our predictions and plans set forth in 2020 are not as helpful in the reality of 2024 and likely even less so for 2025 and beyond.

Rather than continuing to progress through the existing strategy, we have determined that the stronger, and dare we say more strategic, approach is to conduct a strategy refresh.

Refreshing Our Strategy

Organizational strategies are meant to be living documents that guide the efforts of the organization towards its north star—the mission and vision. What we thought would be a strategy that would guide us into 2025 has actually been the foundation of us better understanding CC’s role in a post-pandemic time of significant technological development with generative AI, a much needed social and racial justice reckoning globally, and a rapidly shifting socio-economic environment, all affecting our team and our community.

This strategic refresh process will build on past strategies, leveraging what has worked and learning from what hasn’t. Rather than a strategic shift for CC, we are steering this process with the goal of returning to the core of CC, as a movement, as a community, and as legal and technical infrastructure. Our goal is to emerge with an updated strategy that takes into account generative AI alongside the human act of creating and sharing; paired with a sound operational and sustainable organizational layer.

Intentional Engagement with our Community

Community involvement and engagement with this process will be critical for its success. As the CC team and Board grapple with the directions that this strategic refresh will take, we want to hear from you. What role do you see for CC in the future? How can CC support you and your community in stewarding openness practices and making available the knowledge that will help to solve the greatest challenges of our time? What are the elements that make a strong and effective CC, today, tomorrow, and into the future?

In June, we will begin scheduling a series of open community and stakeholder sessions to be held throughout July and August. Participation information will be shared on our blog and via social media. If you wish to indicate your early interest in participating, please sign up to our mailing list. We are looking forward to connecting with you on this process and we can’t wait to hear your insights on the future of CC."

US intelligence agencies’ embrace of generative AI is at once wary and urgent; Associated Press, May 23, 2024

Frank Bajak, Associated Press; US intelligence agencies’ embrace of generative AI is at once wary and urgent

"The CIA’s inaugural chief technology officer, Nand Mulchandani, thinks that because gen AI models “hallucinate” they are best treated as a “crazy, drunk friend” — capable of great insight and creativity but also bias-prone fibbers. There are also security and privacy issues: adversaries could steal and poison them, and they may contain sensitive personal data that officers aren’t authorized to see.

That’s not stopping the experimentation, though, which is mostly happening in secret. 

An exception: Thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed gen AI called Osiris. It runs on unclassified and publicly or commercially available data — what’s known as open-source. It writes annotated summaries and its chatbot function lets analysts go deeper with queries...

Another worry: Ensuring the privacy of “U.S. persons” whose data may be embedded in a large-language model.

“If you speak to any researcher or developer that is training a large-language model, and ask them if it is possible to basically kind of delete one individual piece of information from an LLM and make it forget that -- and have a robust empirical guarantee of that forgetting -- that is not a thing that is possible,” John Beieler, AI lead at the Office of the Director of National Intelligence, said in an interview.

It’s one reason the intelligence community is not in “move-fast-and-break-things” mode on gen AI adoption."

OpenAI Strikes a Deal to License News Corp Content; The New York Times, May 22, 2024

Katie Robertson , The New York Times; OpenAI Strikes a Deal to License News Corp Content

"News Corp, the Murdoch-owned empire of publications like The Wall Street Journal and The New York Post, announced on Wednesday that it had agreed to a deal with OpenAI to share its content to train and service artificial intelligence chatbots.

News Corp said the multiyear agreement would allow OpenAI to use current and archived news content from News Corp’s major news outlets, including brands in the United States, United Kingdom and Australia as well as MarketWatch and Barron’s. The agreement does not include content from News Corp’s other businesses, such as its digital real estate services or HarperCollins...

Many publishers have worried about the threat to their business posed by generative A.I., which uses copyrighted content to train its models and service its chatbots."

Wednesday, May 22, 2024

Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content; UT News, The University of Texas at Austin, May 21, 2024

UT News, The University of Texas at Austin; Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content

"When people learn things they should not know, getting them to forget that information can be tough. This is also true of rapidly growing artificial intelligence programs that are trained to think as we do, and it has become a problem as they run into challenges based on the use of copyright-protected material and privacy issues.

To respond to this challenge, researchers at The University of Texas at Austin have developed what they believe is the first “machine unlearning” method applied to image-based generative AI. This method offers the ability to look under the hood and actively block and remove any violent images or copyrighted works without losing the rest of the information in the model.

“When you train these models on such massive data sets, you’re bound to include some data that is undesirable,” said Radu Marculescu, a professor in the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering and one of the leaders on the project. “Previously, the only way to remove problematic content was to scrap everything, start anew, manually take out all that data and retrain the model. Our approach offers the opportunity to do this without having to retrain the model from scratch.”"

Tuesday, May 21, 2024

Korea to establish AI copyright system by year-end; The Korea Times, May 21, 2024

Baek Byung-yeul , The Korea Times; Korea to establish AI copyright system by year-end

"Among the eight key policy tasks, the government plans to reform by the end of this year the copyright system over content created by generative AI.

“The government will promptly revise the AI copyright system to bolster the development of the AI-based creative industry,” the ministry said."

Saturday, May 18, 2024

Stability AI, Midjourney should face artists' copyright case, judge says; Reuters, May 8, 2024

Reuters; Stability AI, Midjourney should face artists' copyright case, judge says

"A California federal judge said he was inclined to green-light a copyright lawsuit against Stability AI, Midjourney and other companies accused of misusing visual artists' work to train their artificial intelligence-based image generation systems.

U.S. District Judge William Orrick said on Tuesday that the ten artists behind the lawsuit had plausibly argued that Stability, Midjourney, DeviantArt and Runway AI copied and stored their work on company servers and could be liable for using it without permission...

Orrick also said that he was likely to dismiss some of the artists' related claims but allow their allegations that the companies violated their trademark rights and falsely implied that they endorsed the systems.

The case is Andersen v. Stability AI, U.S. District Court for the Northern District of California, No. 3:23-cv-00201."