Showing posts with label AI. Show all posts

Thursday, February 20, 2025

AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead; Electronic Frontier Foundation (EFF), February 19, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead


[Kip Currier: No, not everyone. Not requiring Big Tech to figure out a way to fairly license or get permission to use the copyrighted works of creators unjustly advantages these deep-pocketed corporations. It also inequitably disadvantages the economic and creative interests of the human beings who labor to create copyrightable content -- authors, songwriters, visual artists, and many others.

The tell is that many of these same Big Tech companies are only too willing to file copyright infringement lawsuits against anyone whom they allege is infringing their AI content to create competing products and services.]


[Excerpt]


"Threats to Socially Valuable Research and Innovation 

Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning (ML) and even text and data mining (TDM) prohibitively complicated and expensive, if not impossible. Researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in myriad fields. However, licensing the vast quantity of works that high-quality TDM research requires is frequently cost-prohibitive and practically infeasible.  

"Fair use protects ML and TDM research for good reason. Without fair use, copyright would hinder important scientific advancements that benefit all of us. Empirical studies back this up: research using TDM methodologies is more common in countries that protect TDM research from copyright control; in countries that don’t, copyright restrictions stymie beneficial research. It’s easy to see why: it would be impossible to identify and negotiate with millions of different copyright owners to analyze, say, text from the internet."

Monday, February 17, 2025

Copyright battles loom over artists and AI; Financial Times, February 16, 2025

Louise Lucas, Financial Times; Copyright battles loom over artists and AI

"Artists are the latest creative industry to gripe about the exploitative nature of artificial intelligence. More than 3,000 have written to protest against plans by Christie’s to auction art created using AI."

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."

Thursday, January 30, 2025

Vatican says AI has 'shadow of evil,' calls for close oversight; Reuters, January 28, 2025

Reuters; Vatican says AI has 'shadow of evil,' calls for close oversight

"The Vatican on Tuesday called for governments to keep a close eye on the development of artificial intelligence, warning the technology contained "the shadow of evil" in its ability to spread misinformation.

"AI generated fake media can gradually undermine the foundations of society," said a new text on the ethics of AI, written by two Vatican departments and approved by Pope Francis.

"This issue requires careful regulation, as misinformation—especially through AI-controlled or influenced media—can spread unintentionally, fuelling political polarization and social unrest," it said."

Wednesday, January 29, 2025

Copyright Office Releases Part 2 of Artificial Intelligence Report; U.S. Copyright Office, Issue No. 1060, January 29, 2025

U.S. Copyright Office, Issue No. 1060; Copyright Office Releases Part 2 of Artificial Intelligence Report

"Today, the U.S. Copyright Office is releasing Part 2 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI). This Part of the Report addresses the copyrightability of outputs created using generative AI. The Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts. The Office confirms that the use of AI to assist in the process of creation or the inclusion of AI-generated material in a larger human-generated work does not bar copyrightability. It also finds that the case has not been made for changes to existing law to provide additional protection for AI-generated outputs.

“After considering the extensive public comments and the current state of technological development, our conclusions turn on the centrality of human creativity to copyright,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “Where that creativity is expressed through the use of AI systems, it continues to enjoy protection. Extending protection to material whose expressive elements are determined by a machine, however, would undermine rather than further the constitutional goals of copyright.”

In early 2023, the Copyright Office announced a broad initiative to explore the intersection of copyright and AI. Since then, the Office has issued registration guidance for works incorporating AI-generated content, hosted public listening sessions and webinars, met with experts and stakeholders, published a notice of inquiry seeking input from the public, and reviewed more than 10,000 responsive comments, which served to inform these conclusions.

The Report is being released in three Parts. Part 1 was published on July 31, 2024, and recommended federal legislation to respond to the unauthorized distribution of digital replicas that realistically but falsely depict an individual. The final, forthcoming Part 3 will address the legal implications of training AI models on copyrighted works, including licensing considerations and the allocation of any potential liability.

As announced last year, the Office also plans to supplement its March 2023 registration guidance and update the relevant sections of the Compendium of U.S. Copyright Office Practices.

For more information about the Copyright Office’s AI Initiative, please visit the website."

Saturday, January 25, 2025

Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators; Forbes, January 25, 2025

Virginie Berger, Forbes; Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators

"Generative AI is reshaping creativity in ways that highlight a troubling paradox: while touted as a force for innovation, it increasingly relies on exploiting copyrighted materials, songs, books, and artworks, without consent or compensation. This transformation underscores the growing conflict between technological progress and the preservation of artistic integrity. At the heart of the issue lies a troubling paradox: while companies like OpenAI and Google promote AI as a force for innovation, their reliance on scraping copyrighted materials, songs, books, and artworks, undermines the very creativity they claim to enhance. This exploitation is often disguised as progress or justified as necessary for global competitiveness, particularly in the AI race against China. However, these claims mask a deeper reality: the consolidation of power by Big Tech at the expense of creators. As the balance of influence shifts, those who drive culture and innovation are increasingly marginalized, raising urgent questions about the future of intellectual property and creative industries."

Paul McCartney: Don't let AI rip off artists; BBC, January 25, 2025

Laura Kuenssberg, BBC; Paul McCartney: Don't let AI rip off artists

"Sir Paul McCartney has told the BBC proposed changes to copyright law could allow "rip off" technology that might make it impossible for musicians and artists to make a living.

The government is considering an overhaul of the law that would allow AI developers to use creators' content on the internet to help develop their models, unless the rights holders opt out.

In a rare interview for Sunday with Laura Kuenssberg, Sir Paul said "when we were kids in Liverpool, we found a job that we loved, but it also paid the bills", warning the proposals could remove the incentive for writers and artists and result in a "loss of creativity". 

The government said it aimed to deliver legal certainty through a copyright regime that provided creators with "real control" and transparency."

Monday, January 20, 2025

Is the law playing catch-up with AI?; Harvard Law Today, January 16, 2025

Harvard Law Today; Is the law playing catch-up with AI?

"Harvard Law Today: Why was the Artificial Intelligence and Intellectual Property Law Conference in November convened? Why is it important to be talking about AI and IP right now?

William Lee: In the past, this event has been much more focused on the specifics of the law and comparisons of the different approaches across jurisdictions. This year, the conference addressed AI more generally with moderators and panelists from a wider variety of fields including homeland security, life sciences, technological development, non-profit advocacy, and even ethics. I think it was an introduction into AI for many of the people in the room and who better to provide that introduction than [Harvard Law School Professor] Jonathan Zittrain ’95. Matt Ferraro, senior counselor for cybersecurity and emerging technology to the secretary of Homeland Security and executive director of the Artificial Intelligence Safety and Security Board, led a panel primarily of industry leaders, explaining the capabilities and trajectory of AI technology. Then, Iain Cunningham from NVIDIA chaired an excellent panel mostly composed of academics and people from industry discussing how IP law and AI interact. We also had input from experts on the AI and IP relationship in jurisdictions across the globe, including Europe, the UK, and Africa, on a panel moderated by Terry Fisher that was particularly illuminating. Then, we closed with a judges panel where a group of five Federal Circuit and two District Court judges offered views on AI issues as well as IP more broadly.

Louis Tompros: IP law has historically, and inherently, operated at the intersection of law and fast-moving technology. Artificial Intelligence is currently where technology is moving the fastest and where the law has the most ground to cover in order to keep pace. This conference was designed to educate folks about AI technology and the various IP law approaches taken in the United States and around the world, and to help forecast how protections for creative and useful ideas will function in the context of these innovative systems. We try to make the IP conference as broadly appealing and relevant to the groups of constituents that are interested in participating, that is, people within the legal community, the business community, and the academic community, including Harvard Law School students. This year was the first time ever that the conference was fully subscribed via pre-registration which is, I think, a great testament to the level and breadth of interest. You can tell that we got it right precisely because of the incredible interest in this year’s event.

HLT: Throughout history, innovations have compelled IP law to adjust and evolve to account for new technology, like the radio, the television, and the internet. Is AI different?

Tompros: The law can’t possibly stay ahead. It will always lag a bit behind. Honestly, that’s part of the fun of IP law because the law is perpetually having to evolve by necessity to keep pace with rapidly evolving aspects of technology. I don’t think AI is different in kind from previous technological revolutions that affected the law, but I do think it is quite possibly different in scale. The pace of the development of the technology here is so accelerated that the speed at which technological advances are coming makes it even harder for the already trailing legal system to catch up. That leads to some interesting possibilities, but it also leads to some serious challenges. Ultimately, it demands creative and innovative thinking in the design of legal structures established to try to manage it."

Sunday, January 19, 2025

Congress Must Change Copyright Law for AI | Opinion; Newsweek, January 16, 2025

Assistant Professor of Business Law, Georgia College and State University, Newsweek; Congress Must Change Copyright Law for AI | Opinion

"Luckily, the Constitution points the way forward. In Article I, Section 8, Congress is explicitly empowered "to promote the Progress of Science" through copyright law. That is to say, the power to create copyrights isn't just about protecting content creators, it's also about advancing human knowledge and innovation.

When the Founders gave Congress this power, they couldn't have imagined artificial intelligence, but they clearly understood that intellectual property laws would need to evolve to promote scientific progress. Congress therefore not only has the authority to adapt copyright law for the AI age, it has the duty to ensure our intellectual property framework promotes rather than hinders technological progress.

Consider what's at risk with inaction...

While American companies are struggling with copyright constraints, China is racing ahead with AI development, unencumbered by such concerns. The Chinese Communist Party has made it clear that they view AI supremacy as a key strategic goal, and they're not going to let intellectual property rights stand in their way.

The choice before us is clear, we can either reform our copyright laws to enable responsible AI development at home or we can watch as the future of AI is shaped by authoritarian powers abroad. The cost of inaction isn't just measured in lost innovation or economic opportunity, it is measured in our diminishing ability to ensure AI develops in alignment with democratic values and a respect for human rights.

The ideal solution here isn't to abandon copyright protection entirely, but to craft a careful exemption for AI training. This could even include provisions for compensating content creators through a mandated licensing framework or revenue-sharing system, ensuring that AI companies can access the data they need while creators can still benefit from and be credited for their work's use in training these models.

Critics will argue that this represents a taking from creators for the benefit of tech companies, but this misses the broader picture. The benefits of AI development flow not just to tech companies but to society as a whole. We should recognize that allowing AI models to learn from human knowledge serves a crucial public good, one we're at risk of losing if Congress doesn't act."

Thursday, January 16, 2025

Biden bids farewell with dark warning for America: the oligarchs are coming; The Guardian, January 15, 2025

in Washington, The Guardian; Biden bids farewell with dark warning for America: the oligarchs are coming

"The primetime speech did not mention Donald Trump by name. Instead it will be remembered for its dark, ominous warning about something wider and deeper of which Trump is a symptom.

“Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedom and a fair shot for everyone to get ahead,” Biden said.

The word “oligarchy” comes from the Greek words meaning rule (arche) by the few (oligos). Some have argued that the dominant political divide in America is no longer between left and right, but between democracy and oligarchy, as power becomes concentrated in the hands of a few. The wealthiest 1% of Americans now has more wealth than the bottom 90% combined.

The trend did not start with Trump but he is set to accelerate it. The self-styled working-class hero has picked the richest cabinet in history, including 13 billionaires, surrounding himself with the very elite he claims to oppose. Elon Musk, the world’s richest man, has become a key adviser. Tech titans Musk, Jeff Bezos and Mark Zuckerberg – collectively worth a trillion dollars – will be sitting at his inauguration on Monday.

Invoking former president Dwight Eisenhower’s farewell address in January 1961 that warned against the rise of a military-industrial complex, Biden said: “Six decades later, I’m equally concerned about the potential rise of a tech industrial complex. It could pose real dangers for our country as well. Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power.”

In an acknowledgement of news deserts and layoffs at venerable institutions such as the Washington Post, Biden added starkly: “The free press is crumbling. Editors are disappearing. Social media is giving up on fact checking. Truth is smothered by lies, told for power and for profit. We must hold the social platforms accountable, to protect our children, our families and our very democracy from the abuse of power.”

Zuckerberg’s recent decision to abandon factcheckers on Facebook, and Musk’s weaponisation of X in favour of far-right movements including Maga, was surely uppermost in Biden’s mind. Trust in the old media is breaking down as people turn to a fragmented new ecosystem. It has all happened with disorienting speed."

Wednesday, January 15, 2025

Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'; Bloomberg Law, January 14, 2025

Bloomberg Law; Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'

"California attorney Mark Lemley dropped Meta Platforms Inc. as a client in a high-profile copyright case because of CEO Mark Zuckerberg’s “descent into toxic masculinity and Neo-Nazi madness,” the Stanford University professor said on LinkedIn."

Tuesday, January 14, 2025

USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation; United States Patent and Trademark Office (USPTO), January 14, 2025

United States Patent and Trademark Office (USPTO); USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation

"AI Strategy outlines how the USPTO will address AI's impact across IP policy, agency operations, and the broader innovation ecosystem  

WASHINGTON—The U.S. Patent and Trademark Office (USPTO) announced a new Artificial Intelligence (AI) Strategy to guide the agency’s efforts toward fulfilling the potential of AI within USPTO operations and across the intellectual property (IP) ecosystem. The Strategy offers a vision for how the USPTO can foster responsible and inclusive AI innovation, harness AI to support the agency’s mission, and advance a positive future for AI to ensure that the country maintains its leadership in innovation. 

“We have a responsibility to promote, empower, and protect innovation,” said Derrick Brent, Acting Under Secretary of Commerce for Intellectual Property and Acting Director of the USPTO. “Developing a strategy to unleash the power of AI while mitigating risks provides a framework to advance innovation and intellectual property.”  

The strategy aims to achieve the USPTO’s AI vision and mission through five focus areas which include: 

  1. Advance the development of IP policies that promote inclusive AI innovation and creativity. 
  2. Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development. 
  3. Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
  4. Develop AI expertise within the USPTO’s workforce.
  5. Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.

The USPTO and our sister agencies within the Department of Commerce, as well as the U.S. Copyright Office, are providing critical guidance and recommendations to advance AI-driven innovation and creativity. In 2022, the USPTO created the AI and Emerging Technology (ET) Partnership, which has worked closely with the AI/ET community to gather public feedback through a series of sessions on topics related to AI and innovation, biotech, and intellectual property (IP) policy. Since its 2022 launch, more than 6,000 stakeholders have engaged with us on these critical issues. In addition, the USPTO collaborates across government to advance American leadership in AI by promoting innovation and competition as set forth in the Biden-Harris Administration’s landmark October 2023 AI Executive Order. 

The full text of the AI Strategy can be found on the AI Strategy webpage. Additional information on AI, including USPTO guidance and more on USPTO’s AI/ET Partnership, can be found on our AI webpage."

Wednesday, January 8, 2025

3Blue1Brown copyright takedown blunder by AI biz blamed on human error; The Register, January 7, 2025

Thomas Claburn, The Register; 3Blue1Brown copyright takedown blunder by AI biz blamed on human error

"The errant copyright takedown of a popular video explaining Bitcoin, from Grant Sanderson's smash-hit YouTube channel 3Blue1Brown, has been reversed, with human error blamed.

The brand protection company responsible for the unwarranted takedown, ChainPatrol.io, also acknowledged that the legal demand was a mistake, and has been in communication with Sanderson about its efforts to return the video to YouTube. Sanderson – who has 6.8 million subscribers on YouTube, and whose in-depth videos mainly on mathematics and science have been viewed more than 600 million times – raised the alarm in a social media post on Monday.

He said he had just learned that YouTube had removed a 2017-era video he had made and applied a copyright strike – three of which will get an account terminated – despite the fact that he had only used his own content in his video."

Monday, January 6, 2025

At the Intersection of A.I. and Spirituality; The New York Times, January 3, 2025

The New York Times; At the Intersection of A.I. and Spirituality

"For centuries, new technologies have changed the ways people worship, from the radio in the 1920s to television sets in the 1950s and the internet in the 1990s. Some proponents of A.I. in religious spaces have gone back even further, comparing A.I.’s potential — and fears of it — to the invention of the printing press in the 15th century.

Religious leaders have used A.I. to translate their livestreamed sermons into different languages in real time, blasting them out to international audiences. Others have compared chatbots trained on tens of thousands of pages of Scripture to a fleet of newly trained seminary students, able to pull excerpts about certain topics nearly instantaneously.

But the ethical questions around using generative A.I. for religious tasks have become more complicated as the technology has improved, religious leaders say. While most agree that using A.I. for tasks like research or marketing is acceptable, other uses for the technology, like sermon writing, are seen by some as a step too far."

Friday, January 3, 2025

U.S. Copyright Office to Begin Issuing Further AI Guidance in January 2025; The National Law Review, January 2, 2025

John Hines of The Sedona Conference, The National Law Review; U.S. Copyright Office to Begin Issuing Further AI Guidance in January 2025

"Parts 2 and 3, which have not yet been released, will be of heightened interest to content creators and to individuals and businesses involved in developing and deploying AI technologies. Ultimate regulatory and legislative determinations could materially recalibrate the scope of ownership and protection afforded to works of authorship, and the stakes are extremely high...

Part 2 of the report, which the Copyright Office expects to publish “after the New Year Holiday,” will address the copyrightability of AI-generated works, and more specifically, how the nature and degree of such use affects copyrightability and registrability. Current law is clear that to be copyrightable, a work must be created by a human. E.g., Thaler v. Perlmutter, 687 F. Supp. 3d 140 (D.D.C. 2023), on appeal. However, assistive tools are used in virtually all creation, from pencils to cameras to photo-editing software programs. In the context of registrability, the Copyright Office offered the following distinction in its March 2023 guidance: “[W]hether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” In Part 2, the Copyright Office will have an additional opportunity to explore these and related issues – this time with the advantage of the many comments offered through the Notice of Inquiry process.

Part 3 of the report, which the Copyright Office anticipates releasing “in the first quarter of 2025,” will focus on issues associated with training data. AI models, depending on their size and scope, may train on millions of documents—many of which are copyrighted or copyrightable— acquired from the Internet or through acquisition of various robust databases. Users of “trained” AI technologies will typically input written prompts to generate written content or images, depending on the model (Sora is now available to generate video). The output is essentially a prediction based on a correlation of values in the model (extracted from the training data) and values that are derived from the user prompts.

Numerous lawsuits, perhaps most notably the case that The New York Times filed against Microsoft and OpenAI, have alleged that the use of data to train AI models constitutes copyright infringement. In many cases there may be little question of copying in the course of uploading data to train the models. Among a variety of issues, a core common issue will be whether the use of the data for training purposes is fair use. Content creators, of course, point to the fact that they have built their livelihoods and/or businesses around their creations and that they should be compensated for what is a violation of their exclusive rights."

Monday, December 30, 2024

Key IP Issues for the Next President and Congress to Tackle: AI and Patent Subject Matter Eligibility; IP Watchdog, December 29, 2024

RYAN J. MALLOY, IP Watchdog; Key IP Issues for the Next President and Congress to Tackle: AI and Patent Subject Matter Eligibility

"The debates surrounding the 2024 election focused on “hot button” issues like abortion, immigration, and transgender rights. But several important IP issues also loom over the next administration and Congress. These issues include AI-generated deepfakes, the use of copyrighted works for AI training, the patentability of AI-assisted inventions, and patent subject matter eligibility more generally. We might see President Trump and the 119th Congress tackle some or all of these issues in the next term."

Sunday, December 29, 2024

Copyright on the Calendar: Top Cases of 2024 and What to Watch in 2025; IP Watchdog, December 20, 2024

YAN SUN, JEFFREY HAJDIN & JOHANNA SCHMITT, IP Watchdog; Copyright on the Calendar: Top Cases of 2024 and What to Watch in 2025

"Copyright law provided an arena for some of the most interesting legal battles in 2024. We review some highlights from 2024 below, as well as some cases to watch in 2025."

Saturday, December 28, 2024

Overcoming AI’s Nagging Trust And Ethics Issues; Forbes, December 28, 2024

Joe McKendrick, Forbes; Overcoming AI’s Nagging Trust And Ethics Issues

"Trust and ethics in AI is what is making business leaders nervous. For example, at least 72% of executives responding to a recent survey from the IBM Institute for Business Value say they “are willing to forgo generative AI benefits due to ethical concerns.” In addition, more than half (56%) indicate they are delaying major investments in generative AI until there is clarity on AI standards and regulations...

"Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” said Ross. “Guardrails can be employed for virtually any area of decisioning, from examining bias to preventing the leakage of sensitive data."...

The situation is not likely to change soon, Jeremy Rambarran, professor at Touro University Graduate School, pointed out. “Although the output that's being generated may be unique, depending on how the output is being presented, there's always a chance that part of the results may not be entirely accurate. This will eventually change down the road as algorithms are enhanced and could eventually be updated in an automated manner.”...

How can AI be best directed to be ethical and trustworthy? Compliance requirements, of course, will be a major driver of AI trust in the future, said Rambarran. “We need to ensure that AI-driven processes comply with ethical guidelines, legal regulations, and industry standards. Humans should be aware of the ethical implications of AI decisions and be ready to intervene when ethical concerns arise.”

Friday, December 27, 2024

While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review; Electronic Frontier Foundation (EFF), December 27, 2024

CORYNNE MCSHERRY, Electronic Frontier Foundation (EFF); While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review

"These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction.  

We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation."

Friday, December 20, 2024

Conclusion of Copyright Office’s Report on Artificial Intelligence Delayed Until 2025; The National Law Review, December 19, 2024

Daniel J. Lass of Robinson & Cole LLP, The National Law Review; Conclusion of Copyright Office’s Report on Artificial Intelligence Delayed Until 2025

"This week, Director Shira Perlmutter indicated that the publication of part two of the U.S. Copyright Office’s three-part report on copyright issues raised by artificial intelligence (AI) would be further delayed. In her letter to the ranking members of the Senate Subcommittee on Intellectual Property and the House Subcommittee on Courts, Intellectual Property, and the Internet, Director Perlmutter indicated that although substantial progress had been made, the Office will not publish part two by the end of 2024 and now expects publication to occur in early 2025.

Part two of the report will describe the copyrightability of generative AI outputs and will build on part one of the report on digital replicas. Following the publication of part two, Director Perlmutter indicated that the third and final part would be published in the first quarter of 2025. Part three will relate to “analyzing the legal issues related to the ingestion of copyrighted works to train AI models, including licensing considerations and the allocation of potential liability.”"