Sunday, July 7, 2024

ChatGPT's Owner Asks For Proof That 'New York Times' Articles Are Original. Here's the Media Company's Response.; Entrepreneur, July 5, 2024

Sherin Shibu, edited by Jessica Thomas, Entrepreneur; ChatGPT's Owner Asks For Proof That 'New York Times' Articles Are Original. Here's the Media Company's Response.

"The New York Times sued OpenAI in December for copyright infringement, alleging that the AI giant used millions of its articles to train ChatGPT. Now OpenAI is asking for proof that NYT articles are actually original — and the NYT is calling the request "irrelevant, improper, and harassing."

OpenAI lawyers filed the request in New York district court on Monday. The ChatGPT-maker asked that the Times bring forth documents proving which portions of the millions of articles in question in the case "are original to the Times and what are not...

The NYT submitted a filing on Wednesday asserting that OpenAI's request "serves no purpose other than harassment and retaliation for The Times's decision to file this lawsuit.""

Saturday, July 6, 2024

THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School; Woodrow Hartzog, Boston University School of Law; Stanford Law School Center for Internet and Society; THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."

Friday, July 5, 2024

A.I. ‘Friend’ for Public School Students Falls Flat; The New York Times, July 1, 2024

Dana Goldstein, The New York Times; A.I. ‘Friend’ for Public School Students Falls Flat

"A.I. companies are heavily marketing themselves to schools, which spend tens of billions of dollars annually on technology. But AllHere’s sudden breakdown illustrates some of the risks of investing taxpayer dollars in artificial intelligence, a technology with enormous potential but little track record, especially when it comes to children. There are many complicated issues at play, including privacy of student data and the accuracy of any information offered via chatbots. And A.I. may also run counter to another growing interest for education leaders and parents — reducing children’s screen time."

Wednesday, July 3, 2024

Starbucks just sued this weed business for copying its logo; Fast Company, July 2, 2024

Grace Snelling, Fast Company; Starbucks just sued this weed business for copying its logo

"On the outside of its repurposed food truck, the NYC-based mobile marijuana retailer Starbuds Flowers features a logo that’s hard to miss. The familiar graphic shows a woman with a pointy crown and long wavy hair descending over her chest, surrounded by a green ring that contains the business name. In her right hand, the woman is puffing on what appears to be a lit blunt, while marijuana leaves frame the whole composition. The logo is clearly a parody of Starbucks’ Siren mascot—but the coffee giant isn’t laughing. 

On June 28, Starbucks filed a copyright and trademark infringement lawsuit against Starbuds in the U.S. District Court for the Southern District of New York."

Tuesday, July 2, 2024

Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement; Business Today, June 29, 2024

Pranav Dixit, Business Today; Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement

"The Center for Investigative Reporting (CIR), the non-profit organisation behind Mother Jones and Reveal, filed a lawsuit against tech giants Microsoft and OpenAI on Thursday, alleging unauthorised use of their copyrighted material to train AI models. This legal action follows similar lawsuits filed by The New York Times and other media organisations...

Interestingly, some media organisations have opted for a different approach, signing licensing deals with OpenAI. These include prominent names like The Associated Press, Axel Springer, the Financial Times, Dotdash Meredith, News Corp, Vox Media, The Atlantic, and Time."

Monday, July 1, 2024

Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation; United States Patent and Trademark Office (USPTO), Monday, August 5, 2024, 9 AM–4:30 PM EDT

United States Patent and Trademark Office (USPTO); Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation

"The United States Patent and Trademark Office (USPTO) invites members of the public to a roundtable discussion about the intersection of artificial intelligence (AI) technology and legal protections for individuals’ reputations and name, image, voice, likeness (NIL), and other indicators of identity. 

This is your opportunity to provide input on:

  • Whether existing legal protections for individuals’ NIL and reputations are sufficient
  • How these legal protections intersect with other intellectual property (IP) laws
  • How AI technology impacts existing legal protections for NIL and reputation

The feedback received will assist the USPTO’s work to develop IP policy recommendations regarding the intersection of AI and IP, in accordance with the Executive Order on AI issued by President Biden in October 2023.

The roundtable will consist of two sessions: A morning in-person session at USPTO headquarters in Alexandria, Virginia, and an afternoon virtual session. Both sessions will be livestreamed."

Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law; UVAToday, June 24, 2024

Josette Corazza, UVAToday; Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law

"UVA School of Law professor Dotan Oliar, an expert in intellectual property who teaches art law and copyright, looked at the OpenAI controversies and how artificial intelligence is raising new legal questions and reviving old debates.

Q. What recourse does an actor have when Open AI claims to use a similar voice, but not the actual voice?

A. The “right of publicity” is the relevant body of intellectual property law available to celebrities who believe their voice (or image or likeness) was misappropriated. This is a state, rather than federal, cause of (legal) action and a right now protected in the majority of states, although the scope of protection is not the same everywhere.

In a similar case from 1988, Ford Motor Co. wanted to use Bette Midler’s voice for a commercial, and just like reportedly happened here, Midler declined. Ford went and hired a “sound-alike” person. In a then-precedential ruling, the 9th Circuit Court decided in Midler’s favor and held that a person’s voice was a protected attribute within their right of publicity."

Internet Archive forced to remove 500,000 books after publishers’ court win; Ars Technica, June 21, 2024

Ars Technica; Internet Archive forced to remove 500,000 books after publishers’ court win

"As a result of book publishers successfully suing the Internet Archive (IA) last year, the free online library that strives to keep growing online access to books recently shrank by about 500,000 titles.

IA reported in a blog post this month that publishers abruptly forcing these takedowns triggered a "devastating loss" for readers who depend on IA to access books that are otherwise impossible or difficult to access.

To restore access, IA is now appealing, hoping to reverse the prior court's decision by convincing the US Court of Appeals in the Second Circuit that IA's controlled digital lending of its physical books should be considered fair use under copyright law."

Sunday, June 30, 2024

Tech companies battle content creators over use of copyrighted material to train AI models; The Canadian Press via CBC, June 30, 2024

Anja Karadeglija, The Canadian Press via CBC; Tech companies battle content creators over use of copyrighted material to train AI models

"Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.

But AI companies maintain that using the material to train their systems doesn't violate copyright, and say limiting its use would stymie the development of AI in Canada.

The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada's copyright laws should address the emergence of generative AI systems like OpenAI's ChatGPT."

Saturday, June 29, 2024

Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web; The Verge, June 28, 2024

Sean Hollister, The Verge; Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web

"Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes “freeware” that anyone can freely copy and use. 

When CNBC’s Andrew Ross Sorkin asked him whether “AI companies have effectively stolen the world’s IP,” he said:

I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding...

I am not a lawyer, but even I can tell you that the moment you create a work, it’s automatically protected by copyright in the US." 

 

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoe Corbin, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

In short, professionals’ perceptions of AI include concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

Friday, June 28, 2024

Native Hawaiian Intellectual Property Working Group convenes inaugural meeting; The Maui News, June 27, 2024

The Maui News; Native Hawaiian Intellectual Property Working Group convenes inaugural meeting

"On June 24, the Native Hawaiian Intellectual Property Working Group (NHIPWG) comprised of experts in native Hawaiian law, indigenous intellectual property, and cultural practitioners held its first meeting at the Hawai’i State Capitol.

The working group was established following the adoption of HCR108 by the House of Representatives in 2023. This resolution urged the creation of a nine-member working group to study policies and legislation concerning native Hawaiian intellectual property. Representative Darius K. Kila, who introduced HCR108, explained that the formation of the group represents efforts to protect the intellectual property rights of Kānaka Maoli, as well as their cultural expressions, language, and art form."

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Tuesday, June 25, 2024

As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move; The Hollywood Reporter, June 24, 2024

Winston Cho, The Hollywood Reporter; As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move

"There’ve been over 25 copyright infringement lawsuits filed against artificial intelligence companies from most corners of the creative industries. Groups that’ve sued include artists, authors and news publishers, among various others alleging mass theft of their copyrighted works to teach AI systems. Of those entities, record companies have taken the most aggressive stance thus far in trying to get out in front of the technology."

Monday, June 24, 2024

AI: World's biggest music labels sue over copyright; BBC News, June 24, 2024

Natalie Sherman, BBC News; AI: World's biggest music labels sue over copyright

"The world's biggest record labels are suing two artificial intelligence (AI) start-ups over alleged copyright violation in a potentially landmark case.

Firms including Sony Music, Universal Music Group and Warner Records say Suno and Udio have committed copyright infringement on an "almost unimaginable scale".

They claim the pair's software steals music to "spit out" similar work and ask for compensation of $150,000 per work."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

Intellectual property and entrepreneurship resources for the military community; United States Patent and Trademark Office (USPTO), May 31, 2024

United States Patent and Trademark Office (USPTO); Intellectual property and entrepreneurship resources for the military community

"Earlier this month at Fort Buchanan, Puerto Rico, an Army veteran and business owner said he wished this valuable USPTO program had been around when he started his business.

Entrepreneurship Essentials Roadshows are part of the From Service to Success program and reflect the USPTO’s mission of inclusive innovation, meeting potential entrepreneurs and small business owners where they are with targeted programming. Roadshows visit military bases worldwide and help by:

  • Providing encouragement from military leadership.
  • Sharing tips from experts on obtaining funding, identifying markets, writing and executing business plans, and hearing from other entrepreneurs.
  • Offering practical information to protect valuable innovations.
  • Networking with other entrepreneurs."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Friday, June 21, 2024

Using AI to Create Content? Watch Out for Copyright Violations; Chicago Business Attorney Blog, June 20, 2024

Chicago Business Attorney Blog; Using AI to Create Content? Watch Out for Copyright Violations

"Businesses using generative AI programs like ChatGPT to create any content—whether for blogs, websites or other marketing materials, and whether text, visuals, sound or video—need to ensure that they’re not inadvertently using copyrighted materials in the process.

Clearly, the times they are a-changing ... and businesses need to adapt to the changes. Employers should promulgate messages to their employees and contractors, updating their policy manuals to ensure that communications professionals and others crafting content are aware of the risks of using AI-generated materials, which go beyond the possibility that they are “hallucinated” rather than factual—although that’s worth considering, too."

Tuesday, June 18, 2024

POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, June 14, 2024

The Vatican; POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, Borgo Egnazia (Puglia)

[Excerpt]

"An exciting and fearsome tool

Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

“Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have ‘skill and understanding and knowledge in every craft’ (Ex 35:31)”. [1] Science and technology are therefore brilliant products of the creative potential of human beings. [2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings. [3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use. [4]"

‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate; Bloomberg Law, June 18, 2024

Laura Heymann , Bloomberg Law; ‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate

"The US Supreme Court’s June 13 decision in the “Trump Too Small” trademark case revealed a potential rift among the justices on First Amendment jurisprudence but did little to advance intellectual property law...

Trademark law, the Supreme Court has said in prior cases, is primarily about two goals: preventing confusion among consumers by ensuring accurate source identification and preserving trademark owners’ reputation and goodwill. For these justices, the names clause passed muster because prohibiting the registration of personal names without consent was self-evidently reasonable in light of these purposes; no further analysis was required."

Monday, June 17, 2024

USPTO Seeks Public Comment on the Impact of Artificial Intelligence on Patentability; JDSupra, June 17, 2024

Ivy Clarice Estoesta, Roozbeh Gorgin, JDSupra; USPTO Seeks Public Comment on the Impact of Artificial Intelligence on Patentability

"The relentless march of technological progress presents a unique challenge for the intellectual property (IP) landscape. Earlier this year, the United States Patent and Trademark Office (USPTO) issued a Request for Comments (RFC) on inventorship guidance for AI-assisted inventions. See 89 FR 10043, available here (last visited June 10, 2024). Many responses to that inquiry, which closes on June 20, 2024, have encouraged the USPTO to investigate how AI impacts obviousness determinations. Not surprisingly, the USPTO recently issued an RFC seeking public input on the potential impact of Artificial Intelligence (AI) on prior art, the knowledge of a person having ordinary skill in the art (PHOSITA), and determinations of patentability. See 89 FR 34217, p. 34217, available here (last visited June 10, 2024). This client alert summarizes that RFC and delves into the complexities surrounding AI and patents, exploring the implications for patent applications, patent owners, patent practitioners, and the future of IP law."

A classical composer just brought down a Chinese state-run YouTube channel over copyright infringement; Fast Company, June 17, 2024

Chris Stokel-Walker, Fast Company; A classical composer just brought down a Chinese state-run YouTube channel over copyright infringement

"Despite the fact that CCTV-6’s YouTube channel, which hosted 22,000 videos and had 2.5 million followers, had received more than the three copyright strikes that would ordinarily result in a termination (or permanent ban of the account), YouTube didn’t take action beyond confirming to Muzzey that the infringing content wouldn’t be reinstated. “YouTube is immunizing CCTV-6 against my claims, even though I have an email directly from them saying, ‘We know this is false. We’re doing it to buy some time,’” he claims. (YouTube declined to comment for this story.)

Not terminating CCTV-6’s channel appeared to be in violation of YouTube’s own rules—and crucially allowed the TV network to continue to upload new content. That was the status quo, until Muzzey asked his litigation counsel to send a legal notice to YouTube pointing out the breaches of copyright law. Two days later, after the litigation counsel presented the same evidence Muzzey did, YouTube took the channel down entirely."