Wednesday, June 5, 2024

OpenAI and Google DeepMind workers warn of AI industry risks in open letter; The Guardian, June 4, 2024

The Guardian; OpenAI and Google DeepMind workers warn of AI industry risks in open letter

"A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Mirac Suzgun, Stanford University; Faiz Surani, Stanford University; Christopher D. Manning, Stanford University; Matthew Dahl, Yale University; Daniel E. Ho, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext 2023) or “avoid[ing]” hallucinations (Thomson Reuters 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first pre-registered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS; Mind Matters, June 3, 2024

Mind Matters; GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS

"The problem, Crawford and Schultz say, is that copyright law, as currently framed, does not really protect individuals under these circumstances. That’s not surprising. Copyright dates back to at least 1710 and the issues were very different then.

For one thing, as Jonathan Bartlett pointed out last December, when the New York Times launched a lawsuit for copyright violation against Microsoft and OpenAI, everyone accepted that big search engines have always violated copyright. But if they brought people to your site, while saving and using your content for themselves, you were getting something out of it at least.

But it’s different with generative AI and the chatbot. They use and replace your content. Users are not going back to you for more. OpenAI freely admits that it violates copyright but relies on loopholes to get around legal responsibility.

As the lawsuits pile up, it’s clear that gen AI and chatbots can’t work without these billions of images and texts. So we either do without them or we find a way to compensate the producers."

Adobe gets called out for violating its own AI ethics; Digital Trends, June 3, 2024

Digital Trends; Adobe gets called out for violating its own AI ethics

"Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-generated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”...

Adobe has since removed the offending images, conceding in the Threads conversation that, “this goes against our Generative AI content policy.”

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”"

AI isn't useless. But is it worth it?; [citation needed], April 17, 2024

Molly White, [citation needed]; AI isn't useless. But is it worth it?

"When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs."

Google’s A.I. Search Leaves Publishers Scrambling; The New York Times, June 1, 2024

Nico Grant, The New York Times; Google’s A.I. Search Leaves Publishers Scrambling

"In May, Google announced that the A.I.-generated summaries, which compile content from news sites and blogs on the topic being searched, would be made available to everyone in the United States. And that change has Mr. Pine and many other publishing executives worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.

“It potentially chokes off the original creators of the content,” Mr. Pine said. The feature, AI Overviews, felt like another step toward generative A.I. replacing “the publications that they have cannibalized,” he added."

How news coverage, often uncritical, helps build up the AI hype; Reuters Institute, May 20, 2024

Prof. Rasmus Kleis Nielsen, Reuters Institute; How news coverage, often uncritical, helps build up the AI hype

"“I would put media reporting [about AI] at around two out of 10,” David Reid, professor of Artificial Intelligence at Liverpool Hope University, said to the BBC earlier this year. “When the media talks about AI, they think of it as a single entity. It is not. What I would like to see is more nuanced reporting.”

While some individual journalists and outlets are highly respected for their reporting on AI, overall, social science research on news media coverage of artificial intelligence provides some support for Reid’s assessment.

Some working in the technology industry may feel very put upon – a few years ago Zachary Lipton, then an assistant professor at the machine learning department at Carnegie Mellon University, was quoted in the Guardian calling media coverage of artificial intelligence “sensationalised crap” and likening it to an “AI misinformation epidemic”. In private conversations, many computer scientists and technologists working in the private sector echo his complaints, decrying what several describe as relentlessly negative coverage obsessed with “killer robots.”"

Thursday, May 30, 2024

The media bosses fighting back against AI — and the ones cutting deals; The Washington Post, May 27, 2024

The Washington Post; The media bosses fighting back against AI — and the ones cutting deals

"The fact that so many media companies are cutting deals with Open AI could “dilute” the leverage that the companies suing it have, Mateen noted. On the other hand, by paying some publishers so much money, Open AI may be undermining its own defense: If it were truly “fair use,” he said, “they’d be confident enough not to pay anything.”"

Wednesday, May 29, 2024

Will the rise of AI spell the end of intellectual property rights?; The Globe and Mail, May 27, 2024

SHEEMA KHAN , The Globe and Mail; Will the rise of AI spell the end of intellectual property rights?

"AI’s first challenge to IP is in the inputs...

Perhaps the question will become: Will IP be the death of AI?...

The second challenge relates to who owns the AI-generated products...

Yet IP rights are key to innovation, as they provide a limited monopoly to monetize investments in research and development. AI represents an existential threat in this regard.

Clearly, the law has not caught up. But sitting idly by is not an option, as there are too many important policy issues at play."

Tuesday, May 28, 2024

Judges in Gen AI Cases Favor Summary Judgment on Copyright; Bloomberg Law, May 28, 2024

 Golriz Chrostowski, Bloomberg Law; Judges in Gen AI Cases Favor Summary Judgment on Copyright 

"Bloomberg Law dockets research found 19 copyright lawsuits against generative AI defendants that are currently pending in federal district courts. These 19 cases are filed in four jurisdictions: the Northern District of California, the District of Delaware, the Southern District of New York, and the Middle District of Tennessee. The 11 judges overseeing these cases have varying degrees of experience deciding summary judgment motions involving copyright law, according to Bloomberg Law’s Litigation Analytics tool...

Despite their varying amounts of experience, it appears that the majority of the generative AI judges are willing to grant in whole or in part motions for summary judgment involving copyright law. However, only Judges Sidney H. Stein, Jon S. Tigar, Waverly D. Crenshaw, Jr., Rakoff, and McMahon have decided motions for summary judgment involving the affirmative defense of fair use. Only three found in favor of fair use, forecasting an uphill battle for generative AI defendants. 

Armed with this kind of knowledge about a judge’s experience with copyright law, practitioners can determine how much information they may need to provide the judge presiding over their case."

Monday, May 27, 2024

‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI; The Hill, May 23, 2024

CHRISTOPHER KENNEALLY, The Hill; ‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI

"Beyond commercially published books, journals, and newspapers, AI databases derive from a vast online trove of publicly available social media and Wikipedia entries, as well as digitized library and museum collections, court proceedings, and government legislation and regulation.

Consumption of public and private individual data on the “open” web marks an important shift in digital evolution. No one is left out. Consequently, we have all become stakeholders.

AI is now forcing us to consider viewing copyright as a public good...

Statutory licensing schemes for copyright-protected works are already applied to cable television systems and music recordings with great success. Fees collected for AI rights-licensing of publicly available works need not be burdensome. The funds can help to underwrite essential public education in digital literacy and civil discourse online.

OpenAI, along with Meta, Apple, Google, Amazon, and others who stand to benefit, must recognize the debt owed to the American people for the data that fuels their AI solutions."

Friday, May 24, 2024

Navigating the Patchwork of AI Laws, Standards, and Guidance; American Bar Association (ABA), May 9, 2024

 Emily Maxim Lamm , American Bar Association (ABA); Navigating the Patchwork of AI Laws, Standards, and Guidance

"The opening weeks of 2024 have seen a record number of state legislative proposals seeking to regulate artificial intelligence (AI) across different sectors in the United States...

With this type of rapid-fire start to the 2024 legislative season, the AI legal landscape will likely continue evolving across the board. As a result, organizations today are facing a complex and dizzying web of proposed and existing AI laws, standards, and guidance.

This article aims to provide a cohesive overview of this AI patchwork and to help organizations navigate this increasingly intricate terrain. The focus here will be on the implications of the White House AI Executive Order, existing state and local laws in the United States, the European Union’s AI Act, and, finally, governance standards to help bring these diverse elements together within a framework."

‘The Hunt for Gollum’ Was Just Announced. It Was on YouTube in 2009.; The New York Times, May 22, 2024

Stefano Montali, The New York Times; ‘The Hunt for Gollum’ Was Just Announced. It Was on YouTube in 2009.

"Early on, he had reached an agreement with the Tolkien estate ensuring that his project would respect its intellectual property, and that it would be freely available and noncommercial. “We were lucky that they looked kindly on it as something that was for the fan community,” Bouchard said.

But YouTube denied the appeal. So, like eagles over Mordor, the Ringers, as the fans are known, swooped in. They wrote articles and posted heated comments on Reddit and other sites, calling the removal “deplorable” and “despicable.” Bouchard noted his disappointment on X.

Bouchard quickly received a follow-up email from YouTube: The movie had been reinstated. In an email, Warner Bros. said it had no official comment. YouTube did not reply to requests for comment."

Copyright issues at OpenAI and other AI startups are ramping up; Fortune, May 23, 2024

JESSICA MATHEWS, Fortune; Copyright issues at OpenAI and other AI startups are ramping up

"Last week, Sony Music Group sent out a letter to more than 700 companies building out or using generative AI models with a clear warning: If you’re going to use Sony’s content, you better have explicit permission."

Thursday, May 23, 2024

TIME FOR A REFRESH; Creative Commons (CC), May 17, 2024

Anna Tumadóttir, Creative Commons (CC); TIME FOR A REFRESH

"The CC team has been evaluating our progress toward our 2021-2025 strategy. Through that process, we have noticed the ways we have been organically adjusting to the social and technical shifts around us, as well as the ebbs and flows of funding availability. It would be an understatement to say that much has changed since we developed the strategy in 2020 and launched it in 2021. Turns out that our predictions and plans set forth in 2020 are not as helpful in the reality of 2024 and likely even less so for 2025 and beyond.

Rather than continuing to progress through the existing strategy, we have determined that the stronger, and dare we say more strategic, approach is to conduct a strategy refresh.

Refreshing Our Strategy

Organizational strategies are meant to be living documents that guide the efforts of the organization towards its north star—the mission and vision. What we thought would be a strategy that would guide us into 2025 has actually been the foundation of us better understanding CC’s role in a post-pandemic time of significant technological development with generative AI, a much needed social and racial justice reckoning globally, and a rapidly shifting socio-economic environment, all affecting our team and our community.

This strategic refresh process will build on past strategies, leveraging what has worked and learning from what hasn’t. Rather than a strategic shift for CC, we are steering this process with the goal of returning to the core of CC, as a movement, as a community, and as legal and technical infrastructure. Our goal is to emerge with an updated strategy that takes into account generative AI alongside the human act of creating and sharing; paired with a sound operational and sustainable organizational layer.

Intentional Engagement with our Community

Community involvement and engagement with this process will be critical for its success. As the CC team and Board grapple with the directions that this strategic refresh will take, we want to hear from you. What role do you see for CC in the future? How can CC support you and your community in stewarding openness practices and making available the knowledge that will help to solve the greatest challenges of our time? What are the elements that make a strong and effective CC, today, tomorrow, and into the future?

In June, we will begin scheduling a series of open community and stakeholder sessions to be held throughout July and August. Participation information will be shared on our blog and via social media. If you wish to indicate your early interest in participating, please sign up to our mailing list. We are looking forward to connecting with you on this process and we can’t wait to hear your insights on the future of CC."

US intelligence agencies’ embrace of generative AI is at once wary and urgent; Associated Press, May 23, 2024

FRANK BAJAK, Associated Press; US intelligence agencies’ embrace of generative AI is at once wary and urgent

"The CIA’s inaugural chief technology officer, Nand Mulchandani, thinks that because gen AI models “hallucinate” they are best treated as a “crazy, drunk friend” — capable of great insight and creativity but also bias-prone fibbers. There are also security and privacy issues: adversaries could steal and poison them, and they may contain sensitive personal data that officers aren’t authorized to see.

That’s not stopping the experimentation, though, which is mostly happening in secret. 

An exception: Thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed gen AI called Osiris. It runs on unclassified and publicly or commercially available data — what’s known as open-source. It writes annotated summaries and its chatbot function lets analysts go deeper with queries...

Another worry: Ensuring the privacy of “U.S. persons” whose data may be embedded in a large-language model.

“If you speak to any researcher or developer that is training a large-language model, and ask them if it is possible to basically kind of delete one individual piece of information from an LLM and make it forget that -- and have a robust empirical guarantee of that forgetting -- that is not a thing that is possible,” John Beieler, AI lead at the Office of the Director of National Intelligence, said in an interview.

It’s one reason the intelligence community is not in “move-fast-and-break-things” mode on gen AI adoption."

OpenAI Strikes a Deal to License News Corp Content; The New York Times, May 22, 2024

Katie Robertson , The New York Times; OpenAI Strikes a Deal to License News Corp Content

"News Corp, the Murdoch-owned empire of publications like The Wall Street Journal and The New York Post, announced on Wednesday that it had agreed to a deal with OpenAI to share its content to train and service artificial intelligence chatbots.

News Corp said the multiyear agreement would allow OpenAI to use current and archived news content from News Corp’s major news outlets, including brands in the United States, United Kingdom and Australia as well as MarketWatch and Barron’s. The agreement does not include content from News Corp’s other businesses, such as its digital real estate services or HarperCollins...

Many publishers have worried about the threat to their business posed by generative A.I., which uses copyrighted content to train its models and service its chatbots."

Wednesday, May 22, 2024

Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content; UT News, The University of Texas at Austin, May 21, 2024

UT News, The University of Texas at Austin; Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content

"When people learn things they should not know, getting them to forget that information can be tough. This is also true of rapidly growing artificial intelligence programs that are trained to think as we do, and it has become a problem as they run into challenges based on the use of copyright-protected material and privacy issues.

To respond to this challenge, researchers at The University of Texas at Austin have developed what they believe is the first “machine unlearning” method applied to image-based generative AI. This method offers the ability to look under the hood and actively block and remove any violent images or copyrighted works without losing the rest of the information in the model.

“When you train these models on such massive data sets, you’re bound to include some data that is undesirable,” said Radu Marculescu, a professor in the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering and one of the leaders on the project. “Previously, the only way to remove problematic content was to scrap everything, start anew, manually take out all that data and retrain the model. Our approach offers the opportunity to do this without having to retrain the model from scratch.”"

Tuesday, May 21, 2024

Korea to establish AI copyright system by year-end; The Korea Times, May 21, 2024

Baek Byung-yeul , The Korea Times; Korea to establish AI copyright system by year-end

"Among the eight key policy tasks, the government plans to reform by the end of this year the copyright system over content created by generative AI.

“The government will promptly revise the AI copyright system to bolster the development of the AI-based creative industry,” the ministry said."

Saturday, May 18, 2024

Stability AI, Midjourney should face artists' copyright case, judge says; Reuters, May 8, 2024

Reuters; Stability AI, Midjourney should face artists' copyright case, judge says

"A California federal judge said he was inclined to green-light a copyright lawsuit against Stability AI, Midjourney and other companies accused of misusing visual artists' work to train their artificial intelligence-based image generation systems.

U.S. District Judge William Orrick said on Tuesday that the ten artists behind the lawsuit had plausibly argued that Stability, Midjourney, DeviantArt and Runway AI copied and stored their work on company servers and could be liable for using it without permission...

Orrick also said that he was likely to dismiss some of the artists' related claims but allow their allegations that the companies violated their trademark rights and falsely implied that they endorsed the systems.

The case is Andersen v. Stability AI, U.S. District Court for the Northern District of California, No. 3:23-cv-00201."

AI/IP Issues Part 2: What Current Litigation Can Tell Us About Where IP in AI Is Heading; LexisNexis, Wednesday, July 24, 2024 1 PM EDT, 10 AM PDT

LexisNexis ; AI/IP Issues Part 2: What Current Litigation Can Tell Us About Where IP in AI Is Heading

"Recent litigation involving generative artificial intelligence (AI) and intellectual property highlights emerging issues that companies need to understand. This webcast explores the litigation trends around patent eligibility and novelty issues, copyright and the fair use defense, and trade secret misappropriation to understand how these cases can help in-house counsel develop internal policies to best protect the company’s intellectual property. Additionally, these current cases can help corporate counsel anticipate challenges and make informed decisions when adopting AI.  

Attend to discover insights into the following trends:

  • Patent litigation reveals uncertainties around patent eligibility and novelty where inventions and/or inventors involve generative AI. 
  • Trade secret litigation is trending up as companies have started to maximize technology protection with trade secrets and recent jury awards have made it a valuable form of intellectual property protection.  
  • Copyright cases are testing the boundaries of fair use related to generative AI."

Reddit shares jump after OpenAI ChatGPT deal; BBC, May 17, 2024

 João da Silva, BBC; Reddit shares jump after OpenAI ChatGPT deal

"Shares in Reddit have jumped more than 10% after the firm said it had struck a partnership deal with artificial intelligence (AI) start-up OpenAI.

Under the agreement, the company behind the ChatGPT chatbot will get access to Reddit content, while it will also bring AI-powered features to the social media platform...

Meanwhile, Google announced a partnership in February which allows the technology giant to access Reddit data to train its AI models.

Both in the European Union and US, there are questions around whether it is copyright infringement to train AI tools on such content, or whether it falls under fair use and "temporary copying" exceptions."

Yet Another AI Copyright Suit Against OpenAI Underscores the Autonomy-Automaton Divide; American Enterprise Institute, May 17, 2024

Michael Rosen, American Enterprise Institute; Yet Another AI Copyright Suit Against OpenAI Underscores the Autonomy-Automaton Divide

"In addition to previous litigation brought against artificial intelligence firms by the New York Times Company,  an alliance of prominent authors, and a group of creative artists, eight newspapers filed a complaint in district court in New York late last month, alleging that OpenAI and Microsoft are infringing their copyrighted articles by training generative AI products on their content and by churning out text that too closely resembles the copyrighted works.

And just like in the predecessor suits, the current litigation highlights a fundamental divide over AI that we’ve explored in this space on numerous occasions: While the newspapers regard ChatGPT and its ilk as mere automatons that mindlessly perform whatever operations they’re programmed to perform, OpenAI and Microsoft present their technology as genuinely autonomous (i.e. transformative and capable of transcending their rote programming.)"

Friday, May 17, 2024

Reed Smith Names AI Chief, Plans for Faster Deals and IP Work; Bloomberg Law, May 15, 2024

Roy Strom, Bloomberg Law; Reed Smith Names AI Chief, Plans for Faster Deals and IP Work

"Richard Robbins is about a week into his role as Reed Smith’s first director of applied artificial intelligence, but—in a sign of how quickly the area is developing—there’s already a lot on his plate.

The law firm is working on at least eight generative artificial intelligence projects, ranging from testing new products to developing AI-fueled workflows for entire practice areas. Robbins, who joined the firm from legal tech company Epiq, will help usher those projects from the test phase to broader rollouts.

“It has been everybody’s part-time work, but we knew that this is so serious,” David Cunningham, Reed Smith’s chief innovation officer, said in an interview. “Just like we have heads of our data insights, our product design team, and our lab, we really needed to have an equal if not greater focus on how we apply AI and data science to the firm.”

Law firms have been trying to develop strategies for how to invest in AI and where to deploy it to make their work more efficient."

Internet Archive must face record label copyright claims, judge rules; Reuters, May 16, 2024

Reuters; Internet Archive must face record label copyright claims, judge rules

"A California federal court has denied the Internet Archive's bid to dismiss part of a copyright case brought by a group of major record labels including Universal Music Group and Sony Music over its program for digitizing and streaming vintage vinyl records."

Thursday, May 16, 2024

AI can make up songs now, but who owns the copyright? The answer is complicated; The Conversation, May 13, 2024

Lecturer in Law, University of New England, The Conversation; AI can make up songs now, but who owns the copyright? The answer is complicated

"With the rapid development of this technology, it is timely to debate whether a similar right of publicity should be introduced in Australia. If so, it would help to safeguard the identity and performance rights of all Australians and also protect against potential AI voice crimes."

Troy professor copyrights creative writing style; News4, May 15, 2024

 News4; Troy professor copyrights creative writing style

"Business professor Dr. Rodger Morrison now has a copyright for a tokenization regarding artificial intelligence."

Wednesday, May 15, 2024

The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights; The National Law Review, May 13, 2024

 Danner Kline of Bradley Arant Boult Cummings LLP, The National Law Review; The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights

"As generative AI systems become increasingly sophisticated and widespread, concerns around the use of copyrighted works in their training data continue to intensify. The proposed Generative AI Copyright Disclosure Act of 2024 attempts to address this unease by introducing new transparency requirements for AI developers.

The Bill’s Purpose and Requirements

The primary goal of the bill is to ensure that copyright owners have visibility into whether their intellectual property is being used to train generative AI models. If enacted, the law would require companies to submit notices to the U.S. Copyright Office detailing the copyrighted works used in their AI training datasets. These notices would need to be filed within 30 days before or after the public release of a generative AI system.

The Copyright Office would then maintain a public database of these notices, allowing creators to search and see if their works have been included. The hope is that this transparency will help copyright holders make more informed decisions about licensing their IP and seeking compensation where appropriate."

Intellectual property: Protecting traditional knowledge from Western plunder; Frontline, May 15, 2024

 DEUTSCHE WELLE, Frontline; Intellectual property: Protecting traditional knowledge from Western plunder

"Stopping the loss of heritage and knowledge

“The problem? When a patent for traditional knowledge is granted to a third party, that party formally becomes the owner of such knowledge,” said Sattigeri. “The nation loses its heritage and its own traditional knowledge.” But now, that could be changing. In May 2024, WIPO’s 193 member states will meet and potentially ratify the first step of a legal instrument aimed at creating greater protections for these assets.

WIPO has broken them down into three areas seen as vulnerable under the current system: genetic resources, traditional knowledge, and traditional cultural expression. Genetic resources are biological materials like plants and animals that contain genetic information, while traditional knowledge encompasses generational wisdom within communities, which is usually passed down orally. This could include knowledge about biodiversity, food, agriculture, healthcare, and more. Traditional cultural expression includes artistic creations reflecting a group’s heritage and identity, like music, art, and design.

“It changes the classic understanding of intellectual property,” said Dornis. “It might break the system that [says that] many things are unprotected.”"

Tuesday, May 14, 2024

AI Challenges, Freedom to Read Top AAP Annual Meeting Discussions; Publishers Weekly, May 13, 2024

 Jim Milliot , Publishers Weekly; AI Challenges, Freedom to Read Top AAP Annual Meeting Discussions

"The search for methods of reining in technology companies’ unauthorized copying of copyrighted materials to build generative AI models was the primary theme of this year's annual meeting of the Association of American Publishers, held May 9 over Zoom...

“To protect society, we will need a forward-thinking scheme of legal rules and enforcement authority across numerous jurisdictions and disciplines—not only intellectual property, but also national security, trade, privacy, consumer protection, and human rights, to name a few,” Pallante said. “And we will need ethical conduct.”...

Newton-Rex began in the generative AI space in 2010, and now leads Fairly Trained, which launched in January as a nonprofit that seeks to certify AI companies that don't train models on copyrighted work without creators’ consent (Pallante is an advisor for the organization). He founded the nonprofit after leaving a tech company, Stability, that declined to use a licensing model to get permission to use copyrighted materials in training. Stability, Newton-Rex said, “argues that you can train on whatever you want. And it's a fair use in the United States, and I think this is not only incorrect, but I think it's ethically unforgivable. And I think we have to fight it with everything we have.”

“The old rules of copyright are gone,” said Maria Ressa, cofounder of the online news company Rappler and winner of the 2021 Nobel Peace Prize, in her keynote. “We are literally standing on the rubble of the world that was. If we don’t recognize it, we can’t rebuild it.”

Ressa added that, in a social media world drowning in misinformation and manipulation, “it is crucial that we get back to facts.” Ressa advised publishers to “hold the line” in protecting their IP, and to continue to defend the importance of truth: “You cannot have rule of law if you do not have integrity of facts.”"

Monday, May 13, 2024

Supreme Court rules for producer against music giant Warner in copyright case; Boing Boing, May 13, 2024

 Boing Boing; Supreme Court rules for producer against music giant Warner in copyright case

"The U.S. Supreme Court ruled that there is no time limit on recovering monetary damages in copyright cases filed within the statute of limitations. The case "turned on whether copyright damages are limited to the period of infringement that occurred during the statute of limitations or whether it could also include instances of infringement from before the statute of limitations period."

The 6-3 ruling, authored by liberal Justice Elena Kagan, affirmed a lower court's decision that favored producer Sherman Nealy, who sued a Warner subsidiary and others in Florida federal court in 2018."

Thursday, May 2, 2024

How One Author Pushed the Limits of AI Copyright; Wired, April 17, 2024

 Wired; How One Author Pushed the Limits of AI Copyright

"The novel draws from Shupe’s eventful life, including her advocacy for more inclusive gender recognition. Its registration provides a glimpse of how the USCO is grappling with artificial intelligence, especially as more people incorporate AI tools into creative work. It is among the first creative works to receive a copyright for the arrangement of AI-generated text.

“We’re seeing the Copyright Office struggling with where to draw the line,” intellectual property lawyer Erica Van Loon, a partner at Nixon Peabody, says. Shupe’s case highlights some of the nuances of that struggle—because the approval of her registration comes with a significant caveat.

The USCO’s notice granting Shupe copyright registration of her book does not recognize her as author of the whole text as is conventional for written works. Instead she is considered the author of the “selection, coordination, and arrangement of text generated by artificial intelligence.” This means no one can copy the book without permission, but the actual sentences and paragraphs themselves are not copyrighted and could theoretically be rearranged and republished as a different book.

The agency backdated the copyright registration to October 10, the day that Shupe originally attempted to register her work. It declined to comment on this story. “The Copyright Office does not comment on specific copyright registrations or pending applications for registration,” Nora Scheland, an agency spokesperson, says. President Biden’s executive order on AI last fall asked the US Patent and Trademark Office to make recommendations on copyright and AI to the White House in consultation with the Copyright Office, including on the “scope of protection for works produced using AI.”"

Wednesday, May 1, 2024

FTC Challenges ‘junk’ patents held by 10 drugmakers, including for Novo Nordisk’s Ozempic; CNBC, April 30, 2024

 Annika Kim Constantino, CNBC; FTC Challenges ‘junk’ patents held by 10 drugmakers, including for Novo Nordisk’s Ozempic

"Most top-selling medications are protected by dozens of patents covering various ingredients, manufacturing processes, and intellectual property. Generic drugmakers can only launch cheaper versions of a branded drug if the patents have expired or are successfully challenged in court.

“By filing bogus patent listings, pharma companies block competition and inflate the cost of prescription drugs, forcing Americans to pay sky-high prices for medicines they rely on,” FTC Chair Lina Khan said in a release. “By challenging junk patent filings, the FTC is fighting these illegal tactics and making sure that Americans can get timely access to innovative and affordable versions of the medicines they need.”

The FTC also notified the Food and Drug Administration about the challenges. The FDA manages patent listings for approved drugs on a document called the Orange Book.

The FTC first challenged dozens of branded drug patents last fall, leading three drugmakers to comply and delist their patents with the FDA. Five other companies did not. 

The Tuesday announcement expands the Biden administration’s effort to crack down on alleged patent abuses by the pharmaceutical industry. The FTC has argued that drugmakers are needlessly listing dozens of extra patents for branded medications to keep their drug prices high and stall generic competitors from entering the U.S. market."

Tuesday, April 30, 2024

Eight US newspapers sue OpenAI and Microsoft for copyright infringement; Associated Press via The Guardian, April 30, 2024

 Associated Press via The Guardian; Eight US newspapers sue OpenAI and Microsoft for copyright infringement

"A group of eight US newspapers is suing ChatGPT-maker OpenAI and Microsoft, alleging that the technology companies have been “purloining millions” of copyrighted news articles without permission or payment to train their artificial intelligence chatbots."

Friday, April 26, 2024

World IP Day: How the Copyright System Builds Our Common Future; Library of Congress Blogs: Copyright Creativity at Work, April 26, 2024

 Ashley Tucker, Library of Congress Blogs: Copyright Creativity at Work; World IP Day: How the Copyright System Builds Our Common Future

"The following is a guest blog post by Miriam Lord, Associate Register of Copyrights and Director of Public Information and Education. 

Each year on April 26, the U.S. Copyright Office joins intellectual property organizations around the world in celebrating World Intellectual Property Day. This year’s theme, set by the World Intellectual Property Organization (WIPO), is “IP and the Sustainable Development Goals: Building Our Common Future with Innovation and Creativity.” Established by the United Nations, the 17 Sustainable Development Goals (SDGs) serve as a “shared blueprint for peace and prosperity for people and the planet, now and into the future.”

The copyright system is a powerful mechanism for ensuring society’s wealth of culture and knowledge. The system also helps creators sustain themselves by granting them certain exclusive rights over their works. In the United States, this concept is so central that it is enshrined in our Constitution.

“Congress shall have Power . . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

-United States Constitution, Article I, Section 8"

Op-Ed: AI’s Most Pressing Ethics Problem; Columbia Journalism Review, April 23, 2024

 Anika Collier Navaroli, Columbia Journalism Review; Op-Ed: AI’s Most Pressing Ethics Problem

"I believe that, now more than ever, it’s time for people to organize and demand that AI companies pause their advance toward deploying more powerful systems and work to fix the technology’s current failures. While it may seem like a far-fetched idea, in February, Google decided to suspend its AI chatbot after it was enveloped in a public scandal. And just last month, in the wake of reporting about a rise in scams using the cloned voices of loved ones to solicit ransom, OpenAI announced it would not be releasing its new AI voice generator, citing its “potential for synthetic voice misuse.”

But I believe that society can’t just rely on the promises of American tech companies that have a history of putting profits and power above people. That’s why I argue that Congress needs to create an agency to regulate the industry. In the realm of AI, this agency should address potential harms by prohibiting the use of synthetic data and by requiring companies to audit and clean the original training data being used by their systems.

AI is now an omnipresent part of our lives. If we pause to fix the mistakes of the past and create new ethical guidelines and guardrails, it doesn’t have to become an existential threat to our future."