Sunday, August 4, 2024

Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.; The New York Times, August 2, 2024

Mike Isaac et al., The New York Times; Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.

"Meta is in discussions with Awkwafina, Judi Dench and other actors and influencers for the right to incorporate their voices into a digital assistant product called MetaAI, according to three people with knowledge of the talks, as the company pushes to build more products that feature artificial intelligence.

Apart from Ms. Dench and Awkwafina, Meta is in talks with the comedian Keegan-Michael Key and other celebrities, said the people, who spoke on the condition of anonymity because the discussions are private. They added that all of Hollywood’s top talent agencies were involved in negotiations with the tech giant." 

Saturday, August 3, 2024

AI is complicating plagiarism. How should scientists respond?; Nature, July 30, 2024

Diana Kwon, Nature; AI is complicating plagiarism. How should scientists respond?

"From accusations that led Harvard University’s president to resign in January, to revelations in February of plagiarized text in peer-review reports, the academic world has been roiled by cases of plagiarism this year.

But a bigger problem looms in scholarly writing. The rapid uptake of generative artificial intelligence (AI) tools — which create text in response to prompts — has raised questions about whether this constitutes plagiarism and under what circumstances it should be allowed. “There’s a whole spectrum of AI use, from completely human-written to completely AI-written — and in the middle, there’s this vast wasteland of confusion,” says Jonathan Bailey, a copyright and plagiarism consultant based in New Orleans, Louisiana.

Generative AI tools such as ChatGPT, which are based on algorithms known as large language models (LLMs), can save time, improve clarity and reduce language barriers. Many researchers now argue that they are permissible in some circumstances and that their use should be fully disclosed.

But such tools complicate an already fraught debate around the improper use of others’ work. LLMs are trained to generate text by digesting vast amounts of previously published writing. As a result, their use could result in something akin to plagiarism — if a researcher passes off the work of a machine as their own, for instance, or if a machine generates text that is very close to a person’s work without attributing the source. The tools can also be used to disguise deliberately plagiarized text, and any use of them is hard to spot. “Defining what we actually mean by academic dishonesty or plagiarism, and where the boundaries are, is going to be very, very difficult,” says Pete Cotton, an ecologist at the University of Plymouth, UK."

Thursday, August 1, 2024

AI Startup Suno Claims “Fair Use” Copyright Doctrine Allows Training On Major Recordings; Deadline, August 1, 2024

Bruce Haring, Deadline; AI Startup Suno Claims “Fair Use” Copyright Doctrine Allows Training On Major Recordings

"Suno CEO and co-founder Mikey Shulman amplified that stance in a blog post today. “We train our models on medium- and high-quality music we can find on the open internet… Much of the open internet indeed contains copyrighted materials, and some of it is owned by major record labels.”

Shulman said such use is viewed by Suno as “early but promising progress. Major record labels see this vision as a threat to their business.” He added, “learning is not infringing. It never has been, and it is not now.”

Shulman also argued that training its AI model from data on the “open internet” is no different than a “kid writing their own rock songs after listening to the genre.” 

The RIAA responded: “It’s a major concession of facts they spent months trying to hide and acknowledged only when forced by a lawsuit. Their industrial scale infringement does not qualify as ‘fair use’. There’s nothing fair about stealing an artist’s life’s work, extracting its core value, and repackaging it to compete directly with the originals…Their vision of the ‘future of music’ is apparently one in which fans will no longer enjoy music by their favorite artists because those artists can no longer earn a living.”"

How OpenAI is Looking to Beat the Growing Pool of Copyright Cases; The Fashion Law, July 23, 2024

Aaron West, The Fashion Law; How OpenAI is Looking to Beat the Growing Pool of Copyright Cases

"The recent barrage of copyright infringement disputes that are being waged against OpenAI and Microsoft by major publishers, authors, and other plaintiffs continues to stack up, with various outcomes coming in early rounds from district courts. While most of the cases that the artificial intelligence (“AI”) company and its chief investor are facing are still in early stages, at least one major theme has started to emerge from the litigation: The two high-powered defendants are leaning on a defense that paints the plaintiffs’ claims as stemming from their manipulation of the AI-powered platforms at play – and thus, prompting allegedly unlikely and hypothetical outputs – and the harm they allege in their infringement cases as purely speculative."

Copyright Office tells Congress: ‘Urgent need’ to outlaw AI-powered impersonation; TechCrunch, July 31, 2024

Devin Coldewey, TechCrunch; Copyright Office tells Congress: ‘Urgent need’ to outlaw AI-powered impersonation

"The U.S. Copyright Office has issued the first part of a report on how AI may affect its domain, and its first recommendation out of the gate is: we need a new law right away to define and combat AI-powered impersonation.

“It has become clear that the distribution of unauthorized digital replicas poses a serious threat not only in the entertainment and political arenas but also for private citizens,” said the agency’s director Shira Perlmutter in a statement accompanying the report. “We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods.”

The report itself, part one of several to come, focuses on this timely aspect of AI and intellectual property, which as a concept encompasses your right to control your own identity."

Wednesday, July 31, 2024

Who Guards AI Ethics? Ending The Blame Game; Forbes, July 30, 2024

Gary Drenik, Forbes; Who Guards AI Ethics? Ending The Blame Game

"As AI becomes increasingly sophisticated and ubiquitous, a critical question has emerged: Who bears the responsibility for ensuring its ethical development and implementation?

According to a recent survey by Prosper Insights & Analytics, about 37% of US adults agree AI solutions need human oversight. However, corporations and governments are engaging in a frustrating game of hot potato, each pointing fingers and shirking accountability. This lack of clear responsibility poses significant risks.

On one hand, excessive government control and overregulation could stifle innovation, hindering AI's progress and potential to solve complex problems. Conversely, unchecked corporate influence and a lack of proper oversight could result in an "AI Wild West," where profit-driven motives supersede ethical considerations. This could result in biased algorithms, privacy breaches and the exacerbation of social inequalities."

Copyright Office Releases Part 1 of Artificial Intelligence Report, Recommends Federal Digital Replica Law; U.S. Copyright Office, July 31, 2024

U.S. Copyright Office; Copyright Office Releases Part 1 of Artificial Intelligence Report, Recommends Federal Digital Replica Law

"Today, the U.S. Copyright Office is releasing Part 1 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI), addressing the topic of digital replicas. This Part of the Report responds to the proliferation of videos, images, or audio recordings that have been digitally created or manipulated to realistically but falsely depict an individual. Given the gaps in existing legal protections, the Office recommends that Congress enact a new federal law that protects all individuals from the knowing distribution of unauthorized digital replicas. The Office also offers recommendations on the elements to be included in crafting such a law. 

“I am pleased to begin sharing the results of our comprehensive study of AI and copyright, with this first set of recommendations to Congress. It has become clear that the distribution of unauthorized digital replicas poses a serious threat not only in the entertainment and political arenas but also for private citizens. We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “We look forward to working with Congress as they consider our recommendations and evaluate future developments.”

In early 2023, the Copyright Office announced a broad initiative to explore the intersection of copyright and artificial intelligence. Since then, the Office has issued registration guidance for works incorporating AI-generated content, hosted public listening sessions and webinars, met with numerous experts and stakeholders, published a notice of inquiry seeking input from the public, and reviewed the more than 10,000 responsive comments.

The Report is being released in several Parts, beginning today. Forthcoming Parts will address the copyrightability of materials created in whole or in part by generative AI, the legal implications of training AI models on copyrighted works, licensing considerations, and the allocation of any potential liability. 

For more information about the Copyright Office’s AI Initiative, please visit the website."

Tuesday, July 30, 2024

An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge; The Conversation, July 23, 2024

Lecturer in Law, University of New England, The Conversation; An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge

"In May, a multibillion-dollar UK-based multinational called Informa announced in a trading update that it had signed a deal with Microsoft involving “access to advanced learning content and data, and a partnership to explore AI expert applications”. Informa is the parent company of Taylor & Francis, which publishes a wide range of academic and technical books and journals, so the data in question may include the content of these books and journals.

According to reports published last week, the authors of the content do not appear to have been asked or even informed about the deal. What’s more, they say they had no opportunity to opt out of the deal, and will not see any money from it...

The types of agreements being reached between academic publishers and AI companies have sparked bigger-picture concerns for many academics. Do we want scholarly research to be reduced to content for AI knowledge mining? There are no clear answers about the ethics and morals of such practices."

Monday, July 29, 2024

The COPIED Act Is an End Run around Copyright Law; Public Knowledge, July 24, 2024

Lisa Macpherson, Public Knowledge; The COPIED Act Is an End Run around Copyright Law

"Over the past week, there has been a flurry of activity related to the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act. While superficially focused on helping people understand when they are looking at content that has been created or altered using artificial intelligence (AI) tools, this overly broad bill makes an end run around copyright law and restricts how everyone – not just huge AI developers – can use copyrighted work as the basis of new creative expression. 

The COPIED Act was introduced in the Senate two weeks ago by Senators Maria Cantwell (D-WA, and Chair of the Commerce Committee); Marsha Blackburn (R-TN); and Martin Heinrich (D-NM). By the end of last week, we learned there may be a hearing and markup on the bill within days or weeks. The bill directs agency action on standards for detecting and labeling synthetic content; requires AI developers to allow the inclusion of these standards on content; and prohibits the use of such content to generate new content or train AI models without consent and compensation from creators. It allows for enforcement by the Federal Trade Commission and state attorneys general, and for private rights of action. 

We want to say unequivocally that this is the wrong bill, at the wrong time, from the wrong policymakers, to address complex questions of copyright and generative artificial intelligence."

Friday, July 26, 2024

Who Owns the Law? The Pro Codes Act’s Copyright Conundrum; The Federalist Society, July 26, 2024

Jill Jacobson, The Federalist Society; Who Owns the Law? The Pro Codes Act’s Copyright Conundrum

"Voluntary consensus standards have become an essential but unseen part of everyday life. From ensuring your apartment building’s elevator is safe to enabling your router to get online, privately developed standards greatly impact modern life. A bill quietly percolating in Congress may impact this typically quiet intersection of law, policy, and regulation. The Protecting and Enhancing Public Access to Codes Act (“Pro Codes Act” (H.R. 1631)) permits standards development organizations (SDOs) to retain copyright protection when their standards are incorporated by reference into law, so long as they make a free version of the code available online. I take no position on this legislation and acknowledge the thoughtful arguments on both sides of this debate. I instead write to highlight how this below-the-radar legislation raises interesting and unintended intellectual property and takings issues with potentially far-reaching consequences.

The key players here are SDOs—typically private groups of industry experts who draft technical standards. Given the expertise that goes into drafting these standards, local, state, and federal governments often incorporate them into regulations by reference. Often, SDOs do not write standards for government use; many standards are authored for private sector guidance. Many SDOs derive significant revenues from licensing or selling copies of the standards. For example, many standards are only available for private viewing behind paywalls. However, government agencies may incorporate these same standards into regulations without the SDOs’ knowledge or consent. This creates tension between SDOs’ understandable wish to be compensated for the fruits of their labor and the regulated public’s need for open access to these standards.

SDOs also face uncertainty about whether the government edicts doctrine—which denies copyright protection to works created by government officials—extends to private entities’ work when it is incorporated into government regulations. At least some cases suggest it may. As the D.C. Circuit stated in American Society for Testing and Materials v. Public.Resource.Org, legal text “falls plainly outside the realm of copyright protection.”

On the surface, the Pro Codes Act aims to strike a fair balance between competing concerns over fair compensation and public access."

Teaching and the Legal Landscape: Primer on the Fair Use Doctrine in copyright law; University of Pittsburgh University Times, July 25, 2024

J.D. Wright, University of Pittsburgh University Times; Teaching and the Legal Landscape: Primer on the Fair Use Doctrine in copyright law

"As the fall semester hurtles toward us, along with decisions about what readings and other materials we’ll assign as homework or present in class, think about the implications of posting copyrighted works on Canvas or presenting them in class. What is acceptable, and what trespasses beyond permissible bounds? Considerations like these are ripe for exploration as we engage in the regular ritual of preparing courses for a new term.

Our question is: Does the Fair Use Doctrine exempt a copyrighted work from the general rule requiring rights-holder approval before someone else can distribute that content?

Applying U.S. intellectual property law, including the Fair Use Doctrine, can be a maddeningly fact-specific process that makes broad generalizations incomplete, unreliable, or even dangerous. However, we can outline some basic principles and a set of steps to follow as you make important decisions about what to redistribute or duplicate—and what not to. In close cases, play it safe or seek legal guidance; this article provides background and context, not legal advice."

Thursday, July 25, 2024

Philip Glass Says Crimean Theater Is Using His Music Without Permission; The Daily Beast, July 25, 2024

Clay Walker, The Daily Beast; Philip Glass Says Crimean Theater Is Using His Music Without Permission

"Legendary American composer Philip Glass had some harsh words after learning that a theater in Russian-annexed Crimea plans to use his music and name as part of a new show. In a letter posted to X, Glass explained that he had learned a new ballet called Wuthering Heights is set to open at the Sevastopol Opera and Ballet Theater—using works he had penned without his consent. “No permission for the use of my music in the ballet or the use of my name in the advertising and promotion of the ballet was ever requested of me or given by me. The use of my music and the use of my name without my consent is in violation of the Berne Convention for the Protection of Literary and Artistic Works to which the Russian Federation is a signatory. It is an act of piracy,” Glass wrote."

Data Owners Are Increasingly Blocking AI Companies From Using Their IP; PetaPixel, July 22, 2024

Matt Growcoot, PetaPixel; Data Owners Are Increasingly Blocking AI Companies From Using Their IP

"Training data for generative AI models like Midjourney and ChatGPT is beginning to dry up, according to a new study.

The world of artificial intelligence moves fast. While court cases attempt to decide whether using copyrighted text, images, and video to train AI models is “fair use”, as tech companies argue, those same firms are already running out of new data to harvest. 

As generative AI has proliferated and become well-known, there has been a well-documented backlash and many have taken action by denying access to their online data — including photographers.

An MIT research group led the study, which looked at 14,000 web domains that are included in three major AI training data sets.

The study, published by the Data Provenance Initiative, discovered an “emerging crisis in consent” as online publishers pull up the drawbridge by not giving permission to AI crawlers.

The researchers looked at the C4, RefinedWeb, and Dolma data sets and found that five percent of all the data is now restricted. But that number jumps to 25 percent when looking at the highest-quality sources. Generative AI needs a good caliber of data to produce good models."

A new tool for copyright holders can show if their work is in AI training data; MIT Technology Review, July 25, 2024

MIT Technology Review; A new tool for copyright holders can show if their work is in AI training data

"Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. 

Now they have a new way to prove it: “copyright traps” developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. The idea is similar to traps that have been used by copyright holders throughout history—strategies like including fake locations on a map or fake words in a dictionary. 

These AI copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The New York Times’ ongoing case against OpenAI is probably the most high-profile of these.  

The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves."
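The trap idea described in the excerpt can be sketched in a few lines. This is an illustration only, not the Imperial College team's actual method (which measures how "familiar" a trained model is with repeated trap sequences); every name below is hypothetical, and detection is reduced to a simple substring search over a text corpus.

```python
import hashlib
import secrets

def make_trap(seed_text: str) -> str:
    """Derive a unique, high-entropy marker sentence to hide in a document."""
    token = hashlib.sha256((seed_text + secrets.token_hex(8)).encode()).hexdigest()[:16]
    return f"The registry lists entry {token} for verification."

def contains_trap(corpus: str, trap: str) -> bool:
    # Naive detection: exact-match search. The published research instead
    # checks whether a model assigns suspiciously low perplexity to the trap.
    return trap in corpus

article = "Original article text goes here."
trap = make_trap(article)
published = article + " " + trap  # in practice the trap is hidden in markup
```

The marker survives verbatim in any corpus that scraped the published page, so a later exact-match (or, in the real method, model-familiarity) check reveals whether the page was used.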

Who will control the future of AI?; The Washington Post, July 25, 2024

The Washington Post; Who will control the future of AI?

"Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?"

Tuesday, July 23, 2024

The Data That Powers A.I. Is Disappearing Fast; The New York Times, July 19, 2024

Kevin Roose , The New York Times; The Data That Powers A.I. Is Disappearing Fast

"For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an “emerging crisis in consent,” as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets — called C4, RefinedWeb and Dolma — 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt."
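The robots.txt opt-outs described in the excerpt can be exercised with Python's standard library. A minimal sketch, assuming a hypothetical publisher file that disallows the AI crawlers GPTBot (OpenAI) and CCBot (Common Crawl) while allowing everything else:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a publisher opting out of AI data harvesting.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# AI crawlers are blocked site-wide; ordinary browsers/bots are not.
print(parser.can_fetch("GPTBot", "https://example.com/article"))      # False
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article")) # True
```

Note that robots.txt is purely advisory: the restrictions the study counts only work if crawlers choose to honor the Robots Exclusion Protocol.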

Monday, July 22, 2024

What Is The Future Of Intellectual Property In A Generative AI World?; Forbes, July 18, 2024

Ron Schmelzer, Forbes; What Is The Future Of Intellectual Property In A Generative AI World?

"Taking a More Sophisticated and Nuanced Approach to GenAI IP Issues

Clearly we’re at a crossroads when it comes to intellectual property and the answers aren’t cut and dry. Simply preventing IP protection of AI-generated works might not be possible if AI systems are used in any significant portion of the creation process. Likewise, prohibiting AI systems from making use of pre-existing IP-protected works might be a Pandora’s box we can’t close. We need to find new approaches that balance the ability to use AI tools as part of the creation process with IP protection of both existing works and the outputs of GenAI systems.

This means a more sophisticated and nuanced approach to clarifying the legal status of data used in AI training and developing mechanisms to ensure that AI-generated outputs respect existing IP rights, while still providing protection for creative outputs that have involved significant elements of human creativity in curation and prompting, even if the outputs are transformative recombinations of training data. Clearly we’re in the early days of the continued evolution of what intellectual property means."

This might be the most important job in AI; Business Insider, July 21, 2024

Business Insider; This might be the most important job in AI

"Generative AI can hallucinate, spread misinformation, and reinforce biases against marginalized groups if it's not managed properly. Given that the technology relies on volumes of sensitive data, the potential for data breaches is also high. At worst, though, there's the danger that the more sophisticated it becomes, the less likely it is to align with human values.

With great power, then, comes great responsibility, and companies that make money from generative AI must also ensure they regulate it.

That's where a chief ethics officer comes in...

Those who are successful in the role ideally have four areas of expertise, according to Mills. They should have a technical grasp over generative AI, experience building and deploying products, an understanding of the major laws and regulations around AI, and significant experience hiring and making decisions at an organization."

The Fast-Moving Race Between Gen-AI and Copyright Law; Baker Donelson, July 10, 2024

Scott M. Douglass and Dominic Rota, Baker Donelson; The Fast-Moving Race Between Gen-AI and Copyright Law

"It is still an open question whether plaintiffs will succeed in showing that use of copyrighted works to train generative AI constitutes copyright infringement and be able to overcome the fair use defense or succeed in showing that generative AI developers are removing CMI in violation of the DMCA.

The government has made some moves in the past few months to resolve these issues. The U.S. Copyright Office started an inquiry in August 2023, seeking public comments on copyright law and policy issues raised by AI systems, and Rep. Adam Schiff (D-Calif.) introduced a new bill in April 2024 that would require people creating a training dataset for a generative AI system to submit to the Register of Copyrights a detailed summary of any copyrighted works used in training. These initiatives will most likely take some time, meaning that currently pending litigation is vitally important for defining copyright law as it applies to generative AI.

Recent licensing deals with news publishers appear to be anywhere from $1 million to $60 million per year, meaning that AI companies will have to pay an enormous amount to license all the copyrighted works necessary to train their generative AI models effectively. However, as potential damages in a copyright infringement case could be billions of dollars, as claimed by Getty Images and other plaintiffs, developers of generative AI programs should seriously consider licensing any copyrighted works used as training data."

Friday, July 19, 2024

The Media Industry’s Race To License Content For AI; Forbes, July 18, 2024

Bill Rosenblatt, Forbes; The Media Industry’s Race To License Content For AI

"AI content licensing initiatives abound. More and more media companies have reached license agreements with AI companies individually. Several startups have formed to aggregate content into large collections for AI platforms to license in one-stop shopping arrangements known in the jargon as blanket licenses. There are now so many such startups that last month they formed a trade association—the Dataset Providers Alliance—to organize them for advocacy.

Ironically, the growing volume of all this activity could jeopardize its value for copyright owners and AI platforms alike.

It will take years before the panoply of lawsuits yields any degree of clarity in the legal rules for copyright in the AI age; we’re in the second year of what is typically a decade-long process for copyright laws to adapt to disruptive technologies. One reason for copyright owners to organize now to provide licenses for AI is that—as we’ve learned from analogous situations in the past—both courts and Congress will consider how easy it is for the AI companies to license content properly in determining whether licensing is required."

Thursday, July 18, 2024

Can Donald Trump or Joe Biden play whatever music they want at a rally or convention? Legal expert says it’s more complicated; Northeastern Global News, July 17, 2024

Northeastern Global News; Can Donald Trump or Joe Biden play whatever music they want at a rally or convention? Legal expert says it’s more complicated

"It turns out that what music gets played at a political rally or campaign event involves more consideration – and legal know-how – than you might think. It’s an area where the law, performing arts and politics intersect in sometimes uncomfortable ways, particularly for a politician like Trump, who has an entire Wikipedia page dedicated to listing musicians who oppose his use of their music.

So, what rights do politicians have when it comes to playing music during rallies, conventions or campaign events? The answer, says Alexandra Roberts, a professor of law and media at Northeastern University, is a little complicated."

Wednesday, July 17, 2024

IBM reaffirms its commitment to the Rome Call for AI ethics; IBM Research, July 15, 2024

Mike Murphy, IBM Research; IBM reaffirms its commitment to the Rome Call for AI ethics

"There have been moments throughout history where the impacts of a new technology have been world-altering. Perhaps this is why the Vatican, along with leaders from most major religions across the world, chose to host a gathering to discuss the implications for future development of AI in Hiroshima, Japan.

Last year, representatives from the Abrahamic religions came together at the Vatican to sign the Rome Call for AI Ethics, which IBM first signed with other industry and government leaders when it was launched by the Vatican in 2020. It's a document where the signatories committed to pursue an ethical approach to AI development and promote the human-centric and inclusive development of AI, rather than replacing humanity.

At Hiroshima this year, the Rome Call was signed by representatives of many of the great Eastern religions, and past signees like IBM reaffirmed their commitment."

CCC Launches Collective Licensing for AI; Publishers Weekly, July 16, 2024

Ed Nawotka, Publishers Weekly; CCC Launches Collective Licensing for AI

"Copyright Clearance Center (CCC) has launched a collective licensing solution for the internal use of copyrighted materials in AI systems. The new service, which became available July 1, is an addition to CCC's existing Annual Copyright Licenses (ACL) service and aims to provide a streamlined method for companies to access a consistent set of rights across multiple rightsholders, while ensuring compensation for content creators...

Lauren Tulloch, VP and managing director of CCC, clarified that the license covers internal, not public, use of the copyrighted materials. “Use cases could include people doing article summarization, literature review, and market surveillance,” Tulloch said...

While there has been progress with direct licenses in the past two years since the introduction of AI, and all publishing companies are employing AI internally, a general solution has been lacking.

CCC’s solution aims to go some way toward rectifying the situation. “We now have a situation where you can litigate, legislate or license,” Armstrong said. “Litigating can take years, legislating can take decades, a quarter of a century in some cases…licensing is efficient, effective and time tested.”"

Japanese media say AI search infringes copyright, urge legal reform; Kyodo News, July 17, 2024

Kyodo News; Japanese media say AI search infringes copyright, urge legal reform

"Artificial intelligence-powered search engines provided by U.S. tech giants like Google LLC and Microsoft Corp. likely infringe on copyright, an association run by Japanese mass media said Wednesday.

The Japan Newspaper Publishers and Editors Association, in a statement, called for companies operating such services to obtain consent from news organizations as search responses often resemble articles that are sourced without permission.

The association analyzed that AI search engines sometimes return inaccurate responses as they inappropriately reuse or modify articles and stressed that the companies should ensure the accuracy and reliability of their services before launch.

The association also urged the Japanese government to review and revise laws related to intellectual property, such as the copyright act, as a matter of urgency."

Tuesday, July 16, 2024

AI copyright case; CTV News, July 13, 2024

CTV News; AI copyright case

"The Federal Court of Canada will decide if artificial intelligence can be considered an author under copyright laws. Colton Praill reports."

USPTO issues AI subject matter eligibility guidance; United States Patent and Trademark Office (USPTO), July 16, 2024

United States Patent and Trademark Office (USPTO); USPTO issues AI subject matter eligibility guidance

"The U.S. Patent and Trademark Office (USPTO) has issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including in artificial intelligence (AI). This guidance update will assist USPTO personnel and stakeholders in determining subject matter eligibility under patent law (35 U.S.C. § 101) of AI inventions. This latest update builds on previous guidance by providing further clarity and consistency to how the USPTO and applicants should evaluate subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology. The guidance update also announces three new examples of how to apply this guidance throughout a wide range of technologies.

The guidance update, which goes into effect on July 17, 2024, provides a background on the USPTO’s efforts related to AI and subject matter eligibility, an overview of the USPTO’s patent subject matter eligibility guidance, and additional discussion on certain areas of the guidance that are particularly relevant to AI inventions, including discussions of Federal Circuit decisions on subject matter eligibility. 

“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “We look forward to hearing public feedback on this guidance update, which will provide further clarity on evaluating subject matter eligibility of AI inventions while incentivizing innovations needed to solve world and community problems.” 

The three new examples provide additional analyses under 35 U.S.C. § 101 of hypothetical claims in certain situations to address particular inquiries, such as whether a claim recites an abstract idea or whether a claim integrates the abstract idea into a practical application. They are intended to assist USPTO personnel in applying the USPTO’s subject matter eligibility guidance to AI inventions during patent examination, appeal, and post-grant proceedings. The examples are available on our AI-related resources webpage and our patent eligibility page on our website.

The USPTO continues to be directly involved in the development of legal and policy measures related to the impact of AI on all forms of intellectual property. The guidance update delivers on the USPTO’s obligations under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to provide guidance to examiners and the public on the impact of AI and issues at the intersection of AI and IP, including patent subject matter eligibility. This follows our announcement earlier this year on Inventorship guidance for AI-assisted inventions, as well as AI guidance for practitioners and a request for comments on the impact of AI on certain patentability considerations, including what qualifies as prior art and the assessment of the level of ordinary skill in the art (comments accepted until July 29, 2024).

The full text of the guidance update on patent subject matter eligibility is available on our Latest AI news and reports webpage, and the corresponding examples are available on our AI-related resources webpage. The USPTO will accept public comments on the guidance update and the examples through September 16, 2024. Please see the Federal Register Notice for instructions on submitting comments."

Record labels sue Verizon for not disconnecting pirates’ Internet service; Ars Technica, July 15, 2024

Ars Technica; Record labels sue Verizon for not disconnecting pirates’ Internet service

"Major record labels sued Verizon on Friday, alleging that the Internet service provider violated copyright law by continuing to serve customers accused of pirating music. Verizon "knowingly provides its high-speed service to a massive community of online pirates," said the complaint filed in US District Court for the Southern District of New York.

Universal, Sony, and Warner say they have sent over 340,000 copyright infringement notices to Verizon since early 2020. "Those notices identify specific subscribers on Verizon's network stealing Plaintiffs' sound recordings through peer-to-peer ('P2P') file-sharing networks that are notorious hotbeds for copyright infringement," the lawsuit said."

Monday, July 15, 2024

Holy See welcomes ‘significant’ new treaty on intellectual property; Vatican News, July 10, 2024

Joseph Tulloch, Vatican News; Holy See welcomes ‘significant’ new treaty on intellectual property

"Archbishop Ettore Balestrero, the Permanent Observer of the Holy See to the United Nations and Other International Organizations in Geneva, has welcomed a historic new treaty on intellectual property.

In an address to member states of the UN's World Intellectual Property Organisation (WIPO), the Archbishop called the treaty a “significant step forward”.

The treaty

WIPO member states adopted the agreement – which regards “Intellectual Property, Genetic Resources and Associated Traditional Knowledge” – in May of this year.

The treaty establishes a new disclosure requirement in international law for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge.

It was the first WIPO treaty in over a decade, as well as the first to ever deal with the genetic resources and traditional knowledge of indigenous peoples."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices – others cautioned regulations might further protect tech giants and burden smaller businesses."

Thursday, July 11, 2024

Religious education group sues Fort Wayne man over copyright claims; The Journal Gazette, July 8, 2024

The Journal Gazette; Religious education group sues Fort Wayne man over copyright claims

"LifeWise claims in its lawsuit that Parrish signed up online to volunteer with the hope of publishing information that might damage the organization’s reputation and prompt parents to oppose LifeWise Academy chapters in their communities.

Parrish accessed LifeWise’s information storage systems, downloaded internal documents and posted them along with the LifeWise curriculum on his website, parentsagainstlifewise.online, according to the lawsuit. It said Parrish also posted links to the curriculum on the Facebook group.

“He improperly obtained our entire copyright protected curriculum, and he posted to his website without our permission,” LifeWise said in a statement Monday.

LifeWise tried to get Parrish to voluntarily remove its curriculum, but the complaint said the organization’s efforts – including an attorney’s cease-and-desist letter and social media messages the chief operating officer sent him – were unsuccessful.

The lawsuit said Parrish responded to the letter with a meme stating, “It’s called fair use (expletive).”

LifeWise disagrees. In its statement, the organization said its curriculum is licensed through a publisher called LifeWay, and anyone is welcome to purchase the LifeWay curriculum through its website.

“Posting the entire curriculum is not ‘fair use,’ and we are confident that the judge will agree,” LifeWise said Monday."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

JUSTIN HENDRIX, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"

Tuesday, July 9, 2024

Record labels sue AI music startups for copyright infringement; WBUR Here & Now, July 8, 2024

WBUR Here & Now; Record labels sue AI music startups for copyright infringement

"Major record labels including Sony, Universal Music Group and Warner are suing two music startups that use artificial intelligence. The labels say Suno and Udio rely on mass copyright infringement, echoing similar complaints from authors, publishers and artists who argue that generative AI infringes on copyright.

Here & Now's Lisa Mullins discusses the cases with Ina Fried, chief technology correspondent for Axios."

Monday, July 8, 2024

China is the runaway leader in generative AI patent applications followed by the US, the UN says; AP, July 3, 2024

JAMEY KEATEN, AP; China is the runaway leader in generative AI patent applications followed by the US, the UN says

"China has requested far more patents than any other country when it comes to generative AI, the U.N. intellectual property agency said Wednesday, with the United States a distant second...

The new report on patents, the first of its kind, aims to track patent applications as a possible indication of trends in artificial intelligence. It focuses only on generative AI and excludes artificial intelligence more broadly, which includes technologies like facial recognition or autonomous driving."

Five Questions to Ask Before Implementing Generative AI; Markkula Center for Applied Ethics at Santa Clara University, July 3, 2024

Ann Skeet, Markkula Center for Applied Ethics at Santa Clara University; Five Questions to Ask Before Implementing Generative AI

"While you don’t want to get too far into the weeds, you can ask for the sources of data that the system is being trained on, says Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and coauthor of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. “[Directors] can also advise proactively choosing an AI system that has an identifiable training data set.”"