Thursday, July 25, 2024

Data Owners Are Increasingly Blocking AI Companies From Using Their IP; PetaPixel, July 22, 2024

Matt Growcoot, PetaPixel; Data Owners Are Increasingly Blocking AI Companies From Using Their IP

"Training data for generative AI models like Midjourney and ChatGPT is beginning to dry up, according to a new study.

The world of artificial intelligence moves fast. While court cases attempt to decide whether using copyrighted text, images, and video to train AI models is “fair use”, as tech companies argue, those same firms are already running out of new data to harvest. 

As generative AI has proliferated and become well known, there has been a well-documented backlash, and many have taken action by denying access to their online data — including photographers.

An MIT research group led the study which looked at 14,000 web domains that are included in three major AI training data sets. 

The study, published by the Data Provenance Initiative, discovered an “emerging crisis in consent” as online publishers pull up the drawbridge by not giving permission to AI crawlers. 

The researchers looked at the C4, RefinedWeb, and Dolma data sets and found that five percent of all the data is now restricted. But that number jumps to 25 percent when looking at the highest-quality sources. Generative AI needs a good caliber of data to produce good models."

A new tool for copyright holders can show if their work is in AI training data; MIT Technology Review, July 25, 2024

MIT Technology Review; A new tool for copyright holders can show if their work is in AI training data

"Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. 

Now they have a new way to prove it: “copyright traps” developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. The idea is similar to traps that have been used by copyright holders throughout history—strategies like including fake locations on a map or fake words in a dictionary. 

These AI copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The New York Times’ ongoing case against OpenAI is probably the most high-profile of these.  

The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves."
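The team's actual traps are long synthetic sequences injected into a document many times and later detected with statistical membership-inference tests, since a model rarely memorizes a string it has seen once; the GitHub code implements that approach. Purely as an illustration of the underlying idea (mark a work, then search for the mark), here is a minimal sketch in Python. All function names are hypothetical, not from the published code:

```python
import secrets

def make_trap(num_bytes: int = 16) -> str:
    # A long random hex string is vanishingly unlikely to occur by chance,
    # so finding it later is strong evidence the marked page was ingested.
    return secrets.token_hex(num_bytes)

def embed_trap(html: str, trap: str) -> str:
    # Hide the marker where human readers will not see it,
    # for example inside an HTML comment.
    return html + f"\n<!-- {trap} -->"

def contains_trap(text: str, trap: str) -> bool:
    # Detection in this toy version is just an exact substring search.
    return trap in text

trap = make_trap()
page = embed_trap("<p>Original article text.</p>", trap)
print(contains_trap(page, trap))           # True: the marked page carries it
print(contains_trap("unrelated text", trap))  # False
```

The real scheme replaces the exact-match step with a statistical test over the model's outputs, because training does not guarantee verbatim recall of a single hidden string.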

Who will control the future of AI?; The Washington Post, July 25, 2024

The Washington Post; Who will control the future of AI?

"Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?"

Tuesday, July 23, 2024

The Data That Powers A.I. Is Disappearing Fast; The New York Times, July 19, 2024

Kevin Roose, The New York Times; The Data That Powers A.I. Is Disappearing Fast

"For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an “emerging crisis in consent,” as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets — called C4, RefinedWeb and Dolma — 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt."
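The Robots Exclusion Protocol works through a plain-text robots.txt file served at a site's root. As a minimal sketch of the opt-outs the study describes (GPTBot and CCBot are real AI-crawler user-agents, but the rules below are a hypothetical example, not any particular publisher's file), a site can block AI crawlers while leaving ordinary bots alone, and a compliant crawler can check the file with Python's standard urllib.robotparser:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve to opt out of AI
# training crawls while permitting everything else. GPTBot (OpenAI)
# and CCBot (Common Crawl) are real crawler user-agent names.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant AI crawler must skip the whole site...
print(parser.can_fetch("GPTBot", "https://example.com/article"))    # False
# ...while other crawlers remain welcome.
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Note that robots.txt is advisory: the protocol only restrains crawlers that choose to honor it, which is part of why the consent question remains contested.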

Monday, July 22, 2024

What Is The Future Of Intellectual Property In A Generative AI World?; Forbes, July 18, 2024

 Ron Schmelzer, Forbes; What Is The Future Of Intellectual Property In A Generative AI World?

"Taking a More Sophisticated and Nuanced Approach to GenAI IP Issues

Clearly we’re at a crossroads when it comes to intellectual property and the answers aren’t cut and dry. Simply preventing IP protection of AI-generated works might not be possible if AI systems are used in any significant portion of the creation process. Likewise, prohibiting AI systems from making use of pre-existing IP-protected works might be a Pandora’s box we can’t close. We need to find new approaches that balance the ability to use AI tools as part of the creation process with IP protection of both existing works and the outputs of GenAI systems.

This means a more sophisticated and nuanced approach to clarifying the legal status of data used in AI training and developing mechanisms to ensure that AI-generated outputs respect existing IP rights, while still providing protection for creative outputs that have involved significant elements of human creativity in curation and prompting, even if the outputs are transformative recombinations of training data. Clearly we’re in the early days of the continued evolution of what intellectual property means."

This might be the most important job in AI; Business Insider, July 21, 2024

Business Insider; This might be the most important job in AI

"Generative AI can hallucinate, spread misinformation, and reinforce biases against marginalized groups if it's not managed properly. Given that the technology relies on volumes of sensitive data, the potential for data breaches is also high. At worst, though, there's the danger that the more sophisticated it becomes, the less likely it is to align with human values.

With great power, then, comes great responsibility, and companies that make money from generative AI must also ensure they regulate it.

That's where a chief ethics officer comes in...

Those who are successful in the role ideally have four areas of expertise, according to Mills. They should have a technical grasp over generative AI, experience building and deploying products, an understanding of the major laws and regulations around AI, and significant experience hiring and making decisions at an organization."

The Fast-Moving Race Between Gen-AI and Copyright Law; Baker Donelson, July 10, 2024

Scott M. Douglass and Dominic Rota, Baker Donelson; The Fast-Moving Race Between Gen-AI and Copyright Law

"It is still an open question whether plaintiffs will succeed in showing that use of copyrighted works to train generative AI constitutes copyright infringement and be able to overcome the fair use defense or succeed in showing that generative AI developers are removing CMI in violation of the DMCA.

The government has made some moves in the past few months to resolve these issues. The U.S. Copyright Office started an inquiry in August 2023, seeking public comments on copyright law and policy issues raised by AI systems, and Rep. Adam Schiff (D-Calif.) introduced a new bill in April 2024, that would require people creating a training dataset for a generative AI system to submit to the Register of Copyrights a detailed summary of any copyrighted works used in training. These initiatives will most likely take some time, meaning that currently pending litigation is vitally important for defining copyright law as it applies to generative AI.

Recent licensing deals with news publishers appear to be anywhere from $1 million to $60 million per year, meaning that AI companies will have to pay an enormous amount to license all the copyrighted works necessary to train their generative AI models effectively. However, as potential damages in a copyright infringement case could be billions of dollars, as claimed by Getty Images and other plaintiffs, developers of generative AI programs should seriously consider licensing any copyrighted works used as training data."

Friday, July 19, 2024

The Media Industry’s Race To License Content For AI; Forbes, July 18, 2024

 Bill Rosenblatt, Forbes; The Media Industry’s Race To License Content For AI

"AI content licensing initiatives abound. More and more media companies have reached license agreements with AI companies individually. Several startups have formed to aggregate content into large collections for AI platforms to license in one-stop shopping arrangements known in the jargon as blanket licenses. There are now so many such startups that last month they formed a trade association—the Dataset Providers Alliance—to organize them for advocacy.

Ironically, the growing volume of all this activity could jeopardize its value for copyright owners and AI platforms alike.

It will take years before the panoply of lawsuits yields any degree of clarity in the legal rules for copyright in the AI age; we’re in the second year of what is typically a decade-long process for copyright laws to adapt to disruptive technologies. One reason for copyright owners to organize now to provide licenses for AI is that—as we’ve learned from analogous situations in the past—both courts and Congress will consider how easy it is for the AI companies to license content properly in determining whether licensing is required."

Thursday, July 18, 2024

Can Donald Trump or Joe Biden play whatever music they want at a rally or convention? Legal expert says it’s more complicated; Northeastern Global News, July 17, 2024

Northeastern Global News; Can Donald Trump or Joe Biden play whatever music they want at a rally or convention? Legal expert says it’s more complicated

"It turns out that what music gets played at a political rally or campaign event involves more consideration –– and legal know-how –– than you might think. It’s an area where the law, performing arts and politics intersect in sometimes uncomfortable ways, particularly for a politician like Trump, who has an entire Wikipedia page dedicated to listing musicians who oppose his use of their music.

So, what rights do politicians have when it comes to playing music during rallies, conventions or campaign events? The answer, says Alexandra Roberts, a professor of law and media at Northeastern University, is a little complicated."

Wednesday, July 17, 2024

IBM reaffirms its commitment to the Rome Call for AI ethics; IBM Research, July 15, 2024

Mike Murphy, IBM Research; IBM reaffirms its commitment to the Rome Call for AI ethics

"There have been moments throughout history where the impacts of a new technology have been world-altering. Perhaps this is why the Vatican, along with leaders from most major religions across the world, chose to host a gathering to discuss the implications for future development of AI in Hiroshima, Japan.

Last year, representatives from the Abrahamic religions came together at the Vatican to sign the Rome Call for AI Ethics, which IBM first signed with other industry and government leaders when it was launched by the Vatican in 2020. It's a document where the signatories committed to pursue an ethical approach to AI development and promote the human-centric and inclusive development of AI, rather than replacing humanity.

At Hiroshima this year, the Rome Call was signed by representatives of many of the great Eastern religions, and past signees like IBM reaffirmed their commitment."

CCC Launches Collective Licensing for AI; Publishers Weekly, July 16, 2024

Ed Nawotka , Publishers Weekly; CCC Launches Collective Licensing for AI

"Copyright Clearance Center (CCC) has launched a collective licensing solution for the internal use of copyrighted materials in AI systems. The new service, which became available July 1, is an addition to CCC's existing Annual Copyright Licenses (ACL) service and aims to provide a streamlined method for companies to access a consistent set of rights across multiple rightsholders, while ensuring compensation for content creators...

Lauren Tulloch, VP and managing director of CCC, clarified that the license covers internal, not public, use of the copyrighted materials. “Use cases could include people doing article summarization, literature review, and market surveillance,” Tulloch said...

While there has been progress with direct licenses in the past two years since the introduction of AI, and all publishing companies are employing AI internally, a general solution has been lacking.

CCC's solution aims to go some way toward rectifying the situation. “We now have a situation where you can litigate, legislate or license,” Armstrong said. “Litigating can take years, legislating can take decades, a quarter of a century in some cases…licensing is efficient, effective and time tested.”"

Japanese media say AI search infringes copyright, urge legal reform; Kyodo News, July 17, 2024

Kyodo News; Japanese media say AI search infringes copyright, urge legal reform

"Artificial intelligence-powered search engines provided by U.S. tech giants like Google LLC and Microsoft Corp. likely infringe on copyright, an association run by Japanese mass media said Wednesday.

The Japan Newspaper Publishers and Editors Association, in a statement, called for companies operating such services to obtain consent from news organizations as search responses often resemble articles that are sourced without permission.

The association's analysis found that AI search engines sometimes return inaccurate responses because they inappropriately reuse or modify articles, and it stressed that the companies should ensure the accuracy and reliability of their services before launch.

The association also urged the Japanese government to review and revise laws related to intellectual property, such as the copyright act, as a matter of urgency."

Tuesday, July 16, 2024

AI copyright case; CTV News, July 13, 2024

CTV News; AI copyright case

"The Federal Court of Canada will decide if artificial intelligence can be considered an author under copyright laws. Colton Praill reports."

USPTO issues AI subject matter eligibility guidance; United States Patent and Trademark Office (USPTO), July 16, 2024

United States Patent and Trademark Office (USPTO); USPTO issues AI subject matter eligibility guidance

"The U.S. Patent and Trademark Office (USPTO) has issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including in artificial intelligence (AI). This guidance update will assist USPTO personnel and stakeholders in determining subject matter eligibility under patent law (35 U.S.C. § 101) of AI inventions. This latest update builds on previous guidance by providing further clarity and consistency to how the USPTO and applicants should evaluate subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology. The guidance update also announces three new examples of how to apply this guidance throughout a wide range of technologies. 

The guidance update, which goes into effect on July 17, 2024, provides a background on the USPTO’s efforts related to AI and subject matter eligibility, an overview of the USPTO’s patent subject matter eligibility guidance, and additional discussion on certain areas of the guidance that are particularly relevant to AI inventions, including discussions of Federal Circuit decisions on subject matter eligibility. 

“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “We look forward to hearing public feedback on this guidance update, which will provide further clarity on evaluating subject matter eligibility of AI inventions while incentivizing innovations needed to solve world and community problems.” 

The three new examples provide additional analyses under 35 U.S.C. § 101 of hypothetical claims in certain situations to address particular inquiries, such as whether a claim recites an abstract idea or whether a claim integrates the abstract idea into a practical application. They are intended to assist USPTO personnel in applying the USPTO’s subject matter eligibility guidance to AI inventions during patent examination, appeal, and post-grant proceedings. The examples are available on our AI-related resources webpage and our patent eligibility page on our website.  

The USPTO continues to be directly involved in the development of legal and policy measures related to the impact of AI on all forms of intellectual property. The guidance update delivers on the USPTO’s obligations under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to provide guidance to examiners and the public on the impact of AI and issues at the intersection of AI and IP, including patent subject matter eligibility. This follows our announcement earlier this year on Inventorship guidance for AI-assisted inventions, as well as AI guidance for practitioners and a request for comments on the impact of AI on certain patentability considerations, including what qualifies as prior art and the assessment of the level of ordinary skill in the art (comments accepted until July 29, 2024). 

The full text of the guidance update on patent subject matter eligibility is available on our Latest AI news and reports webpage, and the corresponding examples are available on our AI-related resources webpage. The USPTO will accept public comments on the guidance update and the examples through September 16, 2024. Please see the Federal Register Notice for instructions on submitting comments."

Record labels sue Verizon for not disconnecting pirates’ Internet service; Ars Technica, July 15, 2024

Ars Technica; Record labels sue Verizon for not disconnecting pirates’ Internet service

"Major record labels sued Verizon on Friday, alleging that the Internet service provider violated copyright law by continuing to serve customers accused of pirating music. Verizon "knowingly provides its high-speed service to a massive community of online pirates," said the complaint filed in US District Court for the Southern District of New York.

Universal, Sony, and Warner say they have sent over 340,000 copyright infringement notices to Verizon since early 2020. "Those notices identify specific subscribers on Verizon's network stealing Plaintiffs' sound recordings through peer-to-peer ('P2P') file-sharing networks that are notorious hotbeds for copyright infringement," the lawsuit said."

Monday, July 15, 2024

Holy See welcomes ‘significant’ new treaty on intellectual property; Vatican News, July 10, 2024

Joseph Tulloch, Vatican News; Holy See welcomes ‘significant’ new treaty on intellectual property

"Archbishop Ettore Balestrero, the Permanent Observer of the Holy See to the United Nations and Other International Organizations in Geneva, has welcomed a historic new treaty on intellectual property.

In an address to members states of the UN's World Intellectual Property Organisation (WIPO), the Archbishop called the treaty a “significant step forward”.

The treaty

WIPO member states adopted the agreement – which regards “Intellectual Property, Genetic Resources and Associated Traditional Knowledge” – in May of this year.

The treaty establishes a new disclosure requirement in international law for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge.

It was the first WIPO treaty in over a decade, as well as the first to ever deal with the genetic resources and traditional knowledge of indigenous peoples."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

Thursday, July 11, 2024

Religious education group sues Fort Wayne man over copyright claims; The Journal Gazette, July 8, 2024

The Journal Gazette; Religious education group sues Fort Wayne man over copyright claims

"LifeWise claims in its lawsuit that Parrish signed up online to volunteer with the hope of publishing information that might damage the organization’s reputation and prompt parents to oppose LifeWise Academy chapters in their communities.

Parrish accessed LifeWise’s information storage systems, downloaded internal documents and posted them along with the LifeWise curriculum on his website, parentsagainstlifewise.online, according to the lawsuit. It said Parrish also posted links to the curriculum on the Facebook group.

“He improperly obtained our entire copyright protected curriculum, and he posted to his website without our permission,” LifeWise said in a statement Monday.

LifeWise tried to get Parrish to voluntarily remove its curriculum, but the complaint said the organization’s efforts – including an attorney’s cease-and-desist letter and social media messages the chief operating officer sent him – were unsuccessful.

The lawsuit said Parrish responded to the letter with a meme stating, “It’s called fair use (expletive).”

LifeWise disagrees. In its statement, the organization said its curriculum is licensed through a publisher called LifeWay, and anyone is welcome to purchase the LifeWay curriculum through its website.

“Posting the entire curriculum is not ‘fair use,’ and we are confident that the judge will agree,” LifeWise said Monday."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

Justin Hendrix, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"

Tuesday, July 9, 2024

Record labels sue AI music startups for copyright infringement; WBUR Here & Now, July 8, 2024

 WBUR Here & Now; Record labels sue AI music startups for copyright infringement

"Major record labels including Sony, Universal Music Group and Warner are suing two music startups that use artificial intelligence. The labels say Suno and Udio rely on mass copyright infringement, echoing similar complaints from authors, publishers and artists who argue that generative AI infringes on copyright.

Here & Now's Lisa Mullins discusses the cases with Ina Fried, chief technology correspondent for Axios."

Monday, July 8, 2024

China is the runaway leader in generative AI patent applications followed by the US, the UN says; AP, July 3, 2024

Jamey Keaten, AP; China is the runaway leader in generative AI patent applications followed by the US, the UN says

"China has requested far more patents than any other country when it comes to generative AI, the U.N. intellectual property agency said Wednesday, with the United States a distant second...

The new report on patents, the first of its kind, aims to track patent applications as a possible indication of trends in artificial intelligence. It focuses only on generative AI and excludes artificial intelligence more broadly, which includes technologies like facial recognition or autonomous driving."

Five Questions to Ask Before Implementing Generative AI; Markkula Center for Applied Ethics at Santa Clara University, July 3, 2024

Ann Skeet, Markkula Center for Applied Ethics at Santa Clara University; Five Questions to Ask Before Implementing Generative AI

"While you don’t want to get too far into the weeds, you can ask for the sources of data that the system is being trained on, says Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and coauthor of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. “[Directors] can also advise proactively choosing an AI system that has an identifiable training data set.”"

Sunday, July 7, 2024

ChatGPT's Owner Asks For Proof That 'New York Times' Articles Are Original. Here's the Media Company's Response.; Entrepreneur, July 5, 2024

Sherin Shibu, edited by Jessica Thomas, Entrepreneur; ChatGPT's Owner Asks For Proof That 'New York Times' Articles Are Original. Here's the Media Company's Response.

"The New York Times sued OpenAI in December for copyright infringement, alleging that the AI giant used millions of its articles to train ChatGPT. Now OpenAI is asking for proof that NYT articles are actually original — and the NYT is calling the request "irrelevant, improper, and harassing."

OpenAI lawyers filed the request in New York district court on Monday. The ChatGPT-maker asked that the Times bring forth documents proving which portions of the millions of articles in question in the case "are original to the Times and what are not"...

The NYT submitted a filing on Wednesday asserting that OpenAI's request "serves no purpose other than harassment and retaliation for The Times's decision to file this lawsuit.""

Saturday, July 6, 2024

The Great Scrape: The Clash Between Scraping and Privacy; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School, and Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society; The Great Scrape: The Clash Between Scraping and Privacy

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."

Friday, July 5, 2024

A.I. ‘Friend’ for Public School Students Falls Flat; The New York Times, July 1, 2024

Dana Goldstein, The New York Times; A.I. ‘Friend’ for Public School Students Falls Flat

"A.I. companies are heavily marketing themselves to schools, which spend tens of billions of dollars annually on technology. But AllHere’s sudden breakdown illustrates some of the risks of investing taxpayer dollars in artificial intelligence, a technology with enormous potential but little track record, especially when it comes to children. There are many complicated issues at play, including privacy of student data and the accuracy of any information offered via chatbots. And A.I. may also run counter to another growing interest for education leaders and parents — reducing children’s screen time."

Wednesday, July 3, 2024

Starbucks just sued this weed business for copying its logo; Fast Company, July 2, 2024

Grace Snelling, Fast Company; Starbucks just sued this weed business for copying its logo

"On the outside of its repurposed food truck, the NYC-based mobile marijuana retailer Starbuds Flowers features a logo that’s hard to miss. The familiar graphic shows a woman with a pointy crown and long wavy hair descending over her chest, surrounded by a green ring that contains the business name. In her right hand, the woman is puffing on what appears to be a lit blunt, while marijuana leaves frame the whole composition. The logo is clearly a parody of Starbucks’ Siren mascot—but the coffee giant isn’t laughing. 

On June 28, Starbucks filed a copyright and trademark infringement lawsuit against Starbuds in the U.S. District Court for the Southern District of New York."

Tuesday, July 2, 2024

Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement; Business Today, June 29, 2024

 Pranav Dixit, Business Today; Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement

"The Center for Investigative Reporting (CIR), the non-profit organisation behind Mother Jones and Reveal, filed a lawsuit against tech giants Microsoft and OpenAI on Thursday, alleging unauthorised use of their copyrighted material to train AI models. This legal action follows similar lawsuits filed by The New York Times and other media organisations...

Interestingly, some media organisations have opted for a different approach, signing licensing deals with OpenAI. These include prominent names like The Associated Press, Axel Springer, the Financial Times, Dotdash Meredith, News Corp, Vox Media, The Atlantic, and Time."

Monday, July 1, 2024

Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation; United States Patent and Trademark Office (USPTO), Monday, August 5, 2024 9 AM EDT - 4:30 PM EDT

United States Patent and Trademark Office (USPTO); Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation

"The United States Patent and Trademark Office (USPTO) invites members of the public to a roundtable discussion about the intersection of artificial intelligence (AI) technology and legal protections for individuals’ reputations and name, image, voice, likeness (NIL), and other indicators of identity. 

This is your opportunity to provide input on:

  • Whether existing legal protections for individuals’ NIL and reputations are sufficient
  • How these legal protections intersect with other intellectual property (IP) laws
  • How AI technology impacts existing legal protections for NIL and reputation

The feedback received will assist the USPTO’s work to develop IP policy recommendations regarding the intersection of AI and IP, in accordance with the Executive Order on AI issued by President Biden in October 2023.

The roundtable will consist of two sessions: A morning in-person session at USPTO headquarters in Alexandria, Virginia, and an afternoon virtual session. Both sessions will be livestreamed."

Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law; UVAToday, June 24, 2024

Josette Corazza, UVAToday; Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law

"UVA School of Law professor Dotan Oliar, an expert in intellectual property who teaches art law and copyright, looked at the OpenAI controversies and how artificial intelligence is raising new legal questions and reviving old debates.

Q. What recourse does an actor have when OpenAI claims to use a similar voice, but not the actual voice?

A. The “right of publicity” is the relevant body of intellectual property law available to celebrities who believe their voice (or image or likeness) was misappropriated. This is a state, rather than federal, cause of (legal) action and a right now protected in the majority of states, although the scope of protection is not the same everywhere.

In a similar case from 1988, Ford Motor Co. wanted to use Bette Midler’s voice for a commercial, and just like reportedly happened here, Midler declined. Ford went and hired a “sound-alike” person. In a then-precedential ruling, the 9th Circuit Court decided in Midler’s favor and held that a person’s voice was a protected attribute within their right of publicity."

Internet Archive forced to remove 500,000 books after publishers’ court win; Ars Technica, June 21, 2024

Ars Technica; Internet Archive forced to remove 500,000 books after publishers’ court win

"As a result of book publishers successfully suing the Internet Archive (IA) last year, the free online library that strives to keep growing online access to books recently shrank by about 500,000 titles.

IA reported in a blog post this month that publishers abruptly forcing these takedowns triggered a "devastating loss" for readers who depend on IA to access books that are otherwise impossible or difficult to access.

To restore access, IA is now appealing, hoping to reverse the prior court's decision by convincing the US Court of Appeals in the Second Circuit that IA's controlled digital lending of its physical books should be considered fair use under copyright law."

Sunday, June 30, 2024

Tech companies battle content creators over use of copyrighted material to train AI models; The Canadian Press via CBC, June 30, 2024

Anja Karadeglija, The Canadian Press via CBC; Tech companies battle content creators over use of copyrighted material to train AI models

"Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.

But AI companies maintain that using the material to train their systems doesn't violate copyright, and say limiting its use would stymie the development of AI in Canada.

The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada's copyright laws should address the emergence of generative AI systems like OpenAI's ChatGPT."

Saturday, June 29, 2024

Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web; The Verge, June 28, 2024

Sean Hollister, The Verge; Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web

"Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes “freeware” that anyone can freely copy and use. 

When CNBC’s Andrew Ross Sorkin asked him whether “AI companies have effectively stolen the world’s IP,” he said:

I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding...

I am not a lawyer, but even I can tell you that the moment you create a work, it’s automatically protected by copyright in the US."