Tuesday, July 16, 2024

USPTO issues AI subject matter eligibility guidance; United States Patent and Trademark Office (USPTO), July 16, 2024

United States Patent and Trademark Office (USPTO); USPTO issues AI subject matter eligibility guidance

"The U.S. Patent and Trademark Office (USPTO) has issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including in artificial intelligence (AI). This guidance update will assist USPTO personnel and stakeholders in determining subject matter eligibility under patent law (35 § U.S.C. 101) of AI inventions. This latest update builds on previous guidance by providing further clarity and consistency to how the USPTO and applicants should evaluate subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology. The guidance update also announces three new examples of how to apply this guidance throughout a wide range of technologies. 

The guidance update, which goes into effect on July 17, 2024, provides a background on the USPTO’s efforts related to AI and subject matter eligibility, an overview of the USPTO’s patent subject matter eligibility guidance, and additional discussion on certain areas of the guidance that are particularly relevant to AI inventions, including discussions of Federal Circuit decisions on subject matter eligibility. 

“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “We look forward to hearing public feedback on this guidance update, which will provide further clarity on evaluating subject matter eligibility of AI inventions while incentivizing innovations needed to solve world and community problems.” 

The three new examples provide additional analyses under 35 U.S.C. § 101 of hypothetical claims in certain situations to address particular inquiries, such as whether a claim recites an abstract idea or whether a claim integrates the abstract idea into a practical application. They are intended to assist USPTO personnel in applying the USPTO’s subject matter eligibility guidance to AI inventions during patent examination, appeal, and post-grant proceedings. The examples are available on our AI-related resources webpage and our patent eligibility page on our website.

The USPTO continues to be directly involved in the development of legal and policy measures related to the impact of AI on all forms of intellectual property. The guidance update delivers on the USPTO’s obligations under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to provide guidance to examiners and the public on the impact of AI and issues at the intersection of AI and IP, including patent subject matter eligibility. This follows our announcement earlier this year on Inventorship guidance for AI-assisted inventions, as well as AI guidance for practitioners and a request for comments on the impact of AI on certain patentability considerations, including what qualifies as prior art and the assessment of the level of ordinary skill in the art (comments accepted until July 29, 2024).

The full text of the guidance update on patent subject matter eligibility is available on our Latest AI news and reports webpage, and the corresponding examples are available on our AI-related resources webpage. The USPTO will accept public comments on the guidance update and the examples through September 16, 2024. Please see the Federal Register Notice for instructions on submitting comments."

Record labels sue Verizon for not disconnecting pirates’ Internet service; Ars Technica, July 15, 2024

Ars Technica; Record labels sue Verizon for not disconnecting pirates’ Internet service

"Major record labels sued Verizon on Friday, alleging that the Internet service provider violated copyright law by continuing to serve customers accused of pirating music. Verizon "knowingly provides its high-speed service to a massive community of online pirates," said the complaint filed in US District Court for the Southern District of New York.

Universal, Sony, and Warner say they have sent over 340,000 copyright infringement notices to Verizon since early 2020. "Those notices identify specific subscribers on Verizon's network stealing Plaintiffs' sound recordings through peer-to-peer ('P2P') file-sharing networks that are notorious hotbeds for copyright infringement," the lawsuit said."

Monday, July 15, 2024

Holy See welcomes ‘significant’ new treaty on intellectual property; Vatican News, July 10, 2024

Joseph Tulloch, Vatican News; Holy See welcomes ‘significant’ new treaty on intellectual property

"Archbishop Ettore Balestrero, the Permanent Observer of the Holy See to the United Nations and Other International Organizations in Geneva, has welcomed a historic new treaty on intellectual property.

In an address to member states of the UN's World Intellectual Property Organisation (WIPO), the Archbishop called the treaty a “significant step forward”.

The treaty


WIPO member states adopted the agreement – which regards “Intellectual Property, Genetic Resources and Associated Traditional Knowledge” – in May of this year.

The treaty establishes a new disclosure requirement in international law for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge.

It was the first WIPO treaty in over a decade, as well as the first ever to deal with the genetic resources and traditional knowledge of indigenous peoples."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

Thursday, July 11, 2024

Religious education group sues Fort Wayne man over copyright claims; The Journal Gazette, July 8, 2024

The Journal Gazette; Religious education group sues Fort Wayne man over copyright claims

"LifeWise claims in its lawsuit that Parrish signed up online to volunteer with the hope of publishing information that might damage the organization’s reputation and prompt parents to oppose LifeWise Academy chapters in their communities.

Parrish accessed LifeWise’s information storage systems, downloaded internal documents and posted them along with the LifeWise curriculum on his website, parentsagainstlifewise.online, according to the lawsuit. It said Parrish also posted links to the curriculum on the Facebook group.

“He improperly obtained our entire copyright protected curriculum, and he posted to his website without our permission,” LifeWise said in a statement Monday.

LifeWise tried to get Parrish to voluntarily remove its curriculum, but the complaint said the organization’s efforts – including an attorney’s cease-and-desist letter and social media messages the chief operating officer sent him – were unsuccessful.

The lawsuit said Parrish responded to the letter with a meme stating, “It’s called fair use (expletive).”

LifeWise disagrees. In its statement, the organization said its curriculum is licensed through a publisher called LifeWay, and anyone is welcome to purchase the LifeWay curriculum through its website.

“Posting the entire curriculum is not ‘fair use,’ and we are confident that the judge will agree,” LifeWise said Monday."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

Justin Hendrix, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"

Tuesday, July 9, 2024

Record labels sue AI music startups for copyright infringement; WBUR Here & Now, July 8, 2024

 WBUR Here & Now; Record labels sue AI music startups for copyright infringement

"Major record labels including Sony, Universal Music Group and Warner are suing two music startups that use artificial intelligence. The labels say Suno and Udio rely on mass copyright infringement, echoing similar complaints from authors, publishers and artists who argue that generative AI infringes on copyright.

Here & Now's Lisa Mullins discusses the cases with Ina Fried, chief technology correspondent for Axios."

Monday, July 8, 2024

China is the runaway leader in generative AI patent applications followed by the US, the UN says; AP, July 3, 2024

Jamey Keaten, AP; China is the runaway leader in generative AI patent applications followed by the US, the UN says

"China has requested far more patents than any other country when it comes to generative AI, the U.N. intellectual property agency said Wednesday, with the United States a distant second...

The new report on patents, the first of its kind, aims to track patent applications as a possible indication of trends in artificial intelligence. It focuses only on generative AI and excludes artificial intelligence more broadly, which includes technologies like facial recognition or autonomous driving."

Five Questions to Ask Before Implementing Generative AI; Markkula Center for Applied Ethics at Santa Clara University, July 3, 2024

Ann Skeet, Markkula Center for Applied Ethics at Santa Clara University; Five Questions to Ask Before Implementing Generative AI

"While you don’t want to get too far into the weeds, you can ask for the sources of data that the system is being trained on, says Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and coauthor of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. “[Directors] can also advise proactively choosing an AI system that has an identifiable training data set.”"

Sunday, July 7, 2024

ChatGPT's Owner Asks For Proof That 'New York Times' Articles Are Original. Here's the Media Company's Response.; Entrepreneur, July 5, 2024

Sherin Shibu, edited by Jessica Thomas, Entrepreneur; ChatGPT's Owner Asks For Proof That 'New York Times' Articles Are Original. Here's the Media Company's Response.

"The New York Times sued OpenAI in December for copyright infringement, alleging that the AI giant used millions of its articles to train ChatGPT. Now OpenAI is asking for proof that NYT articles are actually original — and the NYT is calling the request "irrelevant, improper, and harassing."

OpenAI lawyers filed the request in New York district court on Monday. The ChatGPT-maker asked that the Times bring forth documents proving which portions of the millions of articles in question in the case "are original to the Times and what are not...

The NYT submitted a filing on Wednesday asserting that OpenAI's request "serves no purpose other than harassment and retaliation for The Times's decision to file this lawsuit.""

Saturday, July 6, 2024

The Great Scrape: The Clash Between Scraping and Privacy; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School, and Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society; The Great Scrape: The Clash Between Scraping and Privacy

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."

Friday, July 5, 2024

A.I. ‘Friend’ for Public School Students Falls Flat; The New York Times, July 1, 2024

Dana Goldstein, The New York Times; A.I. ‘Friend’ for Public School Students Falls Flat

"A.I. companies are heavily marketing themselves to schools, which spend tens of billions of dollars annually on technology. But AllHere’s sudden breakdown illustrates some of the risks of investing taxpayer dollars in artificial intelligence, a technology with enormous potential but little track record, especially when it comes to children. There are many complicated issues at play, including privacy of student data and the accuracy of any information offered via chatbots. And A.I. may also run counter to another growing interest for education leaders and parents — reducing children’s screen time."

Wednesday, July 3, 2024

Starbucks just sued this weed business for copying its logo; Fast Company, July 2, 2024

Grace Snelling, Fast Company; Starbucks just sued this weed business for copying its logo

"On the outside of its repurposed food truck, the NYC-based mobile marijuana retailer Starbuds Flowers features a logo that’s hard to miss. The familiar graphic shows a woman with a pointy crown and long wavy hair descending over her chest, surrounded by a green ring that contains the business name. In her right hand, the woman is puffing on what appears to be a lit blunt, while marijuana leaves frame the whole composition. The logo is clearly a parody of Starbucks’ Siren mascot—but the coffee giant isn’t laughing. 

On June 28, Starbucks filed a copyright and trademark infringement lawsuit against Starbuds in the U.S. District Court for the Southern District of New York."

Tuesday, July 2, 2024

Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement; Business Today, June 29, 2024

 Pranav Dixit, Business Today; Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement

"The Center for Investigative Reporting (CIR), the non-profit organisation behind Mother Jones and Reveal, filed a lawsuit against tech giants Microsoft and OpenAI on Thursday, alleging unauthorised use of their copyrighted material to train AI models. This legal action follows similar lawsuits filed by The New York Times and other media organisations...

Interestingly, some media organisations have opted for a different approach, signing licensing deals with OpenAI. These include prominent names like The Associated Press, Axel Springer, the Financial Times, Dotdash Meredith, News Corp, Vox Media, The Atlantic, and Time."

Monday, July 1, 2024

Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation; United States Patent and Trademark Office (USPTO), Monday, August 5, 2024, 9 AM–4:30 PM EDT

United States Patent and Trademark Office (USPTO); Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation

"The United States Patent and Trademark Office (USPTO) invites members of the public to a roundtable discussion about the intersection of artificial intelligence (AI) technology and legal protections for individuals’ reputations and name, image, voice, likeness (NIL), and other indicators of identity. 

This is your opportunity to provide input on:

  • Whether existing legal protections for individuals’ NIL and reputations are sufficient
  • How these legal protections intersect with other intellectual property (IP) laws
  • How AI technology impacts existing legal protections for NIL and reputation

The feedback received will assist the USPTO’s work to develop IP policy recommendations regarding the intersection of AI and IP, in accordance with the Executive Order on AI issued by President Biden in October 2023.

The roundtable will consist of two sessions: A morning in-person session at USPTO headquarters in Alexandria, Virginia, and an afternoon virtual session. Both sessions will be livestreamed."

Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law; UVAToday, June 24, 2024

Josette Corazza, UVAToday; Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law

"UVA School of Law professor Dotan Oliar, an expert in intellectual property who teaches art law and copyright, looked at the OpenAI controversies and how artificial intelligence is raising new legal questions and reviving old debates.

Q. What recourse does an actor have when Open AI claims to use a similar voice, but not the actual voice?

A. The “right of publicity” is the relevant body of intellectual property law available to celebrities who believe their voice (or image or likeness) was misappropriated. This is a state, rather than federal, cause of (legal) action and a right now protected in the majority of states, although the scope of protection is not the same everywhere.

In a similar case from 1988, Ford Motor Co. wanted to use Bette Midler’s voice for a commercial, and just like reportedly happened here, Midler declined. Ford went and hired a “sound-alike” person. In a then-precedential ruling, the 9th Circuit Court decided in Midler’s favor and held that a person’s voice was a protected attribute within their right of publicity."

Internet Archive forced to remove 500,000 books after publishers’ court win; Ars Technica, June 21, 2024

Ars Technica; Internet Archive forced to remove 500,000 books after publishers’ court win

"As a result of book publishers successfully suing the Internet Archive (IA) last year, the free online library that strives to keep growing online access to books recently shrank by about 500,000 titles.

IA reported in a blog post this month that publishers abruptly forcing these takedowns triggered a "devastating loss" for readers who depend on IA to access books that are otherwise impossible or difficult to access.

To restore access, IA is now appealing, hoping to reverse the prior court's decision by convincing the US Court of Appeals in the Second Circuit that IA's controlled digital lending of its physical books should be considered fair use under copyright law."

Sunday, June 30, 2024

Tech companies battle content creators over use of copyrighted material to train AI models; The Canadian Press via CBC, June 30, 2024

Anja Karadeglija, The Canadian Press via CBC; Tech companies battle content creators over use of copyrighted material to train AI models

"Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.

But AI companies maintain that using the material to train their systems doesn't violate copyright, and say limiting its use would stymie the development of AI in Canada.

The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada's copyright laws should address the emergence of generative AI systems like OpenAI's ChatGPT."

Saturday, June 29, 2024

Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web; The Verge, June 28, 2024

 Sean Hollister, The Verge; Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web

"Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes “freeware” that anyone can freely copy and use. 

When CNBC’s Andrew Ross Sorkin asked him whether “AI companies have effectively stolen the world’s IP,” he said:

I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding...

I am not a lawyer, but even I can tell you that the moment you create a work, it’s automatically protected by copyright in the US." 

 

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoë Corbyn, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

 Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

In short, professionals’ perceptions of AI include concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

Friday, June 28, 2024

Native Hawaiian Intellectual Property Working Group convenes inaugural meeting; The Maui News, June 27, 2024

The Maui News; Native Hawaiian Intellectual Property Working Group convenes inaugural meeting

"On June 24, the Native Hawaiian Intellectual Property Working Group (NHIPWG) comprised of experts in native Hawaiian law, indigenous intellectual property, and cultural practitioners held its first meeting at the Hawai’i State Capitol.

The working group was established following the adoption of HCR108 by the House of Representatives in 2023. This resolution urged the creation of a nine-member working group to study policies and legislation concerning native Hawaiian intellectual property. Representative Darius K. Kila, who introduced HCR108, explained that the formation of the group represents efforts to protect the intellectual property rights of Kānaka Maoli, as well as their cultural expressions, language, and art form."

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Tuesday, June 25, 2024

As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move; The Hollywood Reporter, June 24, 2024

 Winston Cho, The Hollywood Reporter; As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move

"There’ve been over 25 copyright infringement lawsuits filed against artificial intelligence companies from most corners of the creative industries. Groups that’ve sued include artists, authors and news publishers, among various others alleging mass theft of their copyrighted works to teach AI systems. Of those entities, record companies have taken the most aggressive stance thus far in trying to get out in front of the technology."

Monday, June 24, 2024

AI: World's biggest music labels sue over copyright; BBC News, June 24, 2024

Natalie Sherman, BBC News; AI: World's biggest music labels sue over copyright

"The world's biggest record labels are suing two artificial intelligence (AI) start-ups over alleged copyright violation in a potentially landmark case.

Firms including Sony Music, Universal Music Group and Warner Records say Suno and Udio have committed copyright infringement on an "almost unimaginable scale".

They claim the pair's software steals music to "spit out" similar work and ask for compensation of $150,000 per work."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

 Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

Intellectual property and entrepreneurship resources for the military community; United States Patent and Trademark Office (USPTO), May 31, 2024

United States Patent and Trademark Office (USPTO); Intellectual property and entrepreneurship resources for the military community

"Earlier this month at Fort Buchanan, Puerto Rico, an Army veteran and business owner said he wished this valuable USPTO program had been around when he started his business.

Entrepreneurship Essentials Roadshows are part of the From Service to Success program and reflect the USPTO’s mission of inclusive innovation, meeting potential entrepreneurs and small business owners where they are with targeted programming. Roadshows visit military bases worldwide and help by:

  • Providing encouragement from military leadership.
  • Sharing tips from experts on obtaining funding, identifying markets, writing and executing business plans, and hearing from other entrepreneurs.
  • Offering practical information to protect valuable innovations.
  • Networking with other entrepreneurs."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."