Wednesday, July 3, 2024

Starbucks just sued this weed business for copying its logo; Fast Company, July 2, 2024

Grace Snelling, Fast Company; Starbucks just sued this weed business for copying its logo

"On the outside of its repurposed food truck, the NYC-based mobile marijuana retailer Starbuds Flowers features a logo that’s hard to miss. The familiar graphic shows a woman with a pointy crown and long wavy hair descending over her chest, surrounded by a green ring that contains the business name. In her right hand, the woman is puffing on what appears to be a lit blunt, while marijuana leaves frame the whole composition. The logo is clearly a parody of Starbucks’ Siren mascot—but the coffee giant isn’t laughing. 

On June 28, Starbucks filed a copyright and trademark infringement lawsuit against Starbuds in the U.S. District Court for the Southern District of New York."

Tuesday, July 2, 2024

Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement; Business Today, June 29, 2024

 Pranav Dixit, Business Today; Center for Investigative Reporting sues Microsoft and OpenAI for copyright infringement

"The Center for Investigative Reporting (CIR), the non-profit organisation behind Mother Jones and Reveal, filed a lawsuit against tech giants Microsoft and OpenAI on Thursday, alleging unauthorised use of their copyrighted material to train AI models. This legal action follows similar lawsuits filed by The New York Times and other media organisations...

Interestingly, some media organisations have opted for a different approach, signing licensing deals with OpenAI. These include prominent names like The Associated Press, Axel Springer, the Financial Times, Dotdash Meredith, News Corp, Vox Media, The Atlantic, and Time."

Monday, July 1, 2024

Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation; United States Patent and Trademark Office (USPTO), Monday, August 5, 2024, 9 AM - 4:30 PM EDT

United States Patent and Trademark Office (USPTO); Public roundtable: AI and protections for use of an individual’s name, image, likeness, or reputation

"The United States Patent and Trademark Office (USPTO) invites members of the public to a roundtable discussion about the intersection of artificial intelligence (AI) technology and legal protections for individuals’ reputations and name, image, voice, likeness (NIL), and other indicators of identity. 

This is your opportunity to provide input on:

  • Whether existing legal protections for individuals’ NIL and reputations are sufficient
  • How these legal protections intersect with other intellectual property (IP) laws
  • How AI technology impacts existing legal protections for NIL and reputation

The feedback received will assist the USPTO’s work to develop IP policy recommendations regarding the intersection of AI and IP, in accordance with the Executive Order on AI issued by President Biden in October 2023.

The roundtable will consist of two sessions: A morning in-person session at USPTO headquarters in Alexandria, Virginia, and an afternoon virtual session. Both sessions will be livestreamed."

Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law; UVAToday, June 24, 2024

Josette Corazza, UVAToday; Following Scarlett Johansson Flap, Intellectual Property Expert Talks AI and the Law

"UVA School of Law professor Dotan Oliar, an expert in intellectual property who teaches art law and copyright, looked at the OpenAI controversies and how artificial intelligence is raising new legal questions and reviving old debates.

Q. What recourse does an actor have when OpenAI claims to use a similar voice, but not the actual voice?

A. The “right of publicity” is the relevant body of intellectual property law available to celebrities who believe their voice (or image or likeness) was misappropriated. This is a state, rather than federal, cause of (legal) action and a right now protected in the majority of states, although the scope of protection is not the same everywhere.

In a similar case from 1988, Ford Motor Co. wanted to use Bette Midler’s voice for a commercial, and just like reportedly happened here, Midler declined. Ford went and hired a “sound-alike” person. In a then-precedential ruling, the 9th Circuit Court decided in Midler’s favor and held that a person’s voice was a protected attribute within their right of publicity."

Internet Archive forced to remove 500,000 books after publishers’ court win; Ars Technica, June 21, 2024

Ars Technica; Internet Archive forced to remove 500,000 books after publishers’ court win

"As a result of book publishers successfully suing the Internet Archive (IA) last year, the free online library that strives to keep growing online access to books recently shrank by about 500,000 titles.

IA reported in a blog post this month that publishers abruptly forcing these takedowns triggered a "devastating loss" for readers who depend on IA to access books that are otherwise impossible or difficult to access.

To restore access, IA is now appealing, hoping to reverse the prior court's decision by convincing the US Court of Appeals in the Second Circuit that IA's controlled digital lending of its physical books should be considered fair use under copyright law."

Sunday, June 30, 2024

Tech companies battle content creators over use of copyrighted material to train AI models; The Canadian Press via CBC, June 30, 2024

Anja Karadeglija, The Canadian Press via CBC; Tech companies battle content creators over use of copyrighted material to train AI models

"Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.

But AI companies maintain that using the material to train their systems doesn't violate copyright, and say limiting its use would stymie the development of AI in Canada.

The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada's copyright laws should address the emergence of generative AI systems like OpenAI's ChatGPT."

Saturday, June 29, 2024

Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web; The Verge, June 28, 2024

 Sean Hollister, The Verge; Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web

"Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes “freeware” that anyone can freely copy and use. 

When CNBC’s Andrew Ross Sorkin asked him whether “AI companies have effectively stolen the world’s IP,” he said:

I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding...

I am not a lawyer, but even I can tell you that the moment you create a work, it’s automatically protected by copyright in the US." 

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoë Corbyn, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

 Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

In short, professionals’ perceptions of AI include concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

Friday, June 28, 2024

Native Hawaiian Intellectual Property Working Group convenes inaugural meeting; The Maui News, June 27, 2024

The Maui News; Native Hawaiian Intellectual Property Working Group convenes inaugural meeting

"On June 24, the Native Hawaiian Intellectual Property Working Group (NHIPWG) comprised of experts in native Hawaiian law, indigenous intellectual property, and cultural practitioners held its first meeting at the Hawai’i State Capitol.

The working group was established following the adoption of HCR108 by the House of Representatives in 2023. This resolution urged the creation of a nine-member working group to study policies and legislation concerning native Hawaiian intellectual property. Representative Darius K. Kila, who introduced HCR108, explained that the formation of the group represents efforts to protect the intellectual property rights of Kānaka Maoli, as well as their cultural expressions, language, and art form."

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Tuesday, June 25, 2024

As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move; The Hollywood Reporter, June 24, 2024

 Winston Cho, The Hollywood Reporter; As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move

"There’ve been over 25 copyright infringement lawsuits filed against artificial intelligence companies from most corners of the creative industries. Groups that’ve sued include artists, authors and news publishers, among various others alleging mass theft of their copyrighted works to teach AI systems. Of those entities, record companies have taken the most aggressive stance thus far in trying to get out in front of the technology."

Monday, June 24, 2024

AI: World's biggest music labels sue over copyright; BBC News, June 24, 2024

Natalie Sherman, BBC News; AI: World's biggest music labels sue over copyright

"The world's biggest record labels are suing two artificial intelligence (AI) start-ups over alleged copyright violation in a potentially landmark case.

Firms including Sony Music, Universal Music Group and Warner Records say Suno and Udio have committed copyright infringement on an "almost unimaginable scale".

They claim the pair's software steals music to "spit out" similar work and ask for compensation of $150,000 per work."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

 Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

Intellectual property and entrepreneurship resources for the military community; United States Patent and Trademark Office (USPTO), May 31, 2024

United States Patent and Trademark Office (USPTO); Intellectual property and entrepreneurship resources for the military community

"Earlier this month at Fort Buchanan, Puerto Rico, an Army veteran and business owner said he wished this valuable USPTO program had been around when he started his business.

Entrepreneurship Essentials Roadshows are part of the From Service to Success program and reflect the USPTO’s mission of inclusive innovation, meeting potential entrepreneurs and small business owners where they are with targeted programming. Roadshows visit military bases worldwide and help by:

  • Providing encouragement from military leadership.
  • Sharing tips from experts on obtaining funding, identifying markets, writing and executing business plans, and hearing from other entrepreneurs.
  • Offering practical information to protect valuable innovations.
  • Networking with other entrepreneurs."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Friday, June 21, 2024

Using AI to Create Content? Watch Out for Copyright Violations; Chicago Business Attorney Blog, June 20, 2024

Chicago Business Attorney Blog; Using AI to Create Content? Watch Out for Copyright Violations

"Businesses using generative AI programs like ChatGPT to create any content—whether for blogs, websites or other marketing materials, and whether text, visuals, sound or video—need to ensure that they’re not inadvertently using copyrighted materials in the process.

Clearly, the times they are a changing….and businesses need to adapt to the changes.  Employers should promulgate messages to their employees and contractors updating their policy manuals to ensure that communications professionals and others crafting content are aware of the risks of using AI-generated materials, which go beyond the possibility that they are “hallucinated” rather than factual—although that’s worth considering, too."

Tuesday, June 18, 2024

Pope Francis Attends the G7 Session on Artificial Intelligence: Address of His Holiness Pope Francis; The Vatican, June 14, 2024

The Vatican; Pope Francis Attends the G7 Session on Artificial Intelligence: Address of His Holiness Pope Francis, Borgo Egnazia (Puglia)

[Excerpt]

            "An exciting and fearsome tool


 Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

“Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have ‘skill and understanding and knowledge in every craft’ (Ex 35:31)”. [1] Science and technology are therefore brilliant products of the creative potential of human beings. [2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings. [3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use. [4]"

‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate; Bloomberg Law, June 18, 2024

Laura Heymann, Bloomberg Law; ‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate

"The US Supreme Court’s June 13 decision in the “Trump Too Small” trademark case revealed a potential rift among the justices on First Amendment jurisprudence but did little to advance intellectual property law...

Trademark law, the Supreme Court has said in prior cases, is primarily about two goals: preventing confusion among consumers by ensuring accurate source identification and preserving trademark owners’ reputation and goodwill. For these justices, the names clause passed muster because prohibiting the registration of personal names without consent was self-evidently reasonable in light of these purposes; no further analysis was required."

Monday, June 17, 2024

USPTO Seeks Public Comment on the Impact of Artificial Intelligence on Patentability; JDSupra, June 17, 2024

Ivy Clarice Estoesta and Roozbeh Gorgin, JDSupra; USPTO Seeks Public Comment on the Impact of Artificial Intelligence on Patentability

"The relentless march of technological progress presents a unique challenge for the intellectual property (IP) landscape. Earlier this year, the United States Patent and Trademark Office (USPTO) issued a Request for Comments (RFC) on inventorship guidance for AI-assisted inventions. See 89 FR 10043, available here (last visited June 10, 2024). Many responses to that inquiry, which closes on June 20, 2024, have encouraged the USPTO to investigate how AI impacts obviousness determinations. Not surprisingly, the USPTO recently issued a RFC seeking public input on the potential impact of Artificial Intelligence (AI) on prior art, the knowledge of a person having ordinary skill in the art (PHOSITA), and determinations of patentability. See89 FR 34217, p. 34217, available here (last visited June 10, 2024). This client alert summarizes that RFC and delves into the complexities surrounding AI and patents, exploring the implications for patent applications, patent owners, patent practitioners, and the future of IP law."

A classical composer just brought down a Chinese state-run YouTube channel over copyright infringement; Fast Company, June 17, 2024

Chris Stokel-Walker, Fast Company; A classical composer just brought down a Chinese state-run YouTube channel over copyright infringement

"Despite the fact that CCTV-6’s YouTube channel, which hosted 22,000 videos and had 2.5 million followers, had received more than the three copyright strikes that would ordinarily result in a termination (or permanent ban of the account), YouTube didn’t take action beyond confirming to Muzzey that the infringing content wouldn’t be reinstated. “YouTube is immunizing CCTV-6 against my claims, even though I have an email directly from them saying, ‘We know this is false. We’re doing it to buy some time,’” he claims. (YouTube declined to comment for this story.)

Not terminating CCTV-6’s channel appeared to be in violation of YouTube’s own rules—and crucially allowed the TV network to continue to upload new content. That was the status quo, until Muzzey asked his litigation counsel to send a legal notice to YouTube pointing out the breaches of copyright law. Two days later, after the litigation counsel presented the same evidence Muzzey did, YouTube took the channel down entirely."

Friday, June 14, 2024

Pope Francis is first pontiff to address G7 leaders with AI speech; Axios, June 14, 2024

"Pope Francis made history Friday as the first pontiff to speak at the Group of Seven meeting in Fasano, Italy, where he discussed his concerns with artificial intelligence.

Why it matters: The pope has long urged caution around AI, calling it "a fascinating tool and also a terrifying one," during his remarks Friday even as he acknowledged its potential applications in medicine, labor, culture, communications, education and politics. 

  • "The holy scriptures say that God gave to human beings his spirit in order for them to have wisdom, intelligence and knowledge in all kinds of tasks," he said. "Science and technology are therefore extraordinary products of the potential which is active in us human beings.""

OpenAI Asks to Merge New York Times, Daily News Copyright Suits; Bloomberg Law, June 14, 2024

Aruni Soni, Bloomberg Law; OpenAI Asks to Merge New York Times, Daily News Copyright Suits

"OpenAI Inc. asked a federal court to combine the “functionally identical” copyright lawsuits filed against it by The New York Times Co. and The Daily News LP alleging it used their content to create artificial intelligence services."

How Pope Francis became the AI ethicist for tech titans and world leaders; The Washington Post, June 14, 2024

The Washington Post; How Pope Francis became the AI ethicist for tech titans and world leaders

"In the same way the pope sought to elevate the urgency of climate change, Francis now is zooming in on AI — a technology he has said poses “a risk to our survival” and could “endanger our common home.”

His concerns center less on sci-fi movie predictions of malevolent machines, or how the possibility of sentient AI might challenge the fundamental definition of life. Rather, he has focused on the apocalyptic potential of weaponized AI. He fears the elimination of human empathy as algorithms begin to decide the outcome of everything from asylum applications to bank loans. He has heralded “the exciting opportunities” of the technology, but frets over its power to deepen inequality within and among nations. Those risks, he insists, must be managed through a global treaty to regulate AI."

Thursday, June 13, 2024

AI copyright fight turns to disclosing original content; Voice of America (VoA), June 13, 2024

 Matt Dibble, Voice of America (VoA); AI copyright fight turns to disclosing original content

"Artists and other creators say their works have been used to build the multibillion-dollar generative AI industry without any compensation for them. Matt Dibble reports on a proposed U.S. law that would force AI companies to reveal their sources."

Wednesday, June 12, 2024

Patently insufficient: a new intellectual property treaty does little to protect Māori traditional knowledge; The Conversation, June 9, 2024

Senior Lecturer Above the Bar (University of Canterbury), Senior Lecturer in Management (Te Herenga Waka — Victoria University of Wellington), and Professor of Commercial Law (Te Herenga Waka — Victoria University of Wellington), The Conversation; Patently insufficient: a new intellectual property treaty does little to protect Māori traditional knowledge

"The problem of “biopiracy” – the misappropriation and patenting for profit of Indigenous knowledge – has been on the rise for some time. So a global treaty aimed at protecting traditional knowledge and genetic resources should be a welcome development.

In late May, the World Intellectual Property Organization (WIPO) adopted the Treaty on Intellectual Property, Genetic Resources and Traditional Knowledge. It is the first international agreement on intellectual property that includes provisions on Indigenous peoples’ knowledge. 

More than 20 years in the making, it represents the culmination of negotiations between the 193 WIPO member states since 2000. And on the face of it, the treaty appears to be an important intervention to prevent biopiracy. 

However, the new agreement is unlikely to lead to major changes to New Zealand law, or improve the rights of Māori to own or control their intellectual property and taonga (treasured possessions). Given the well-documented misappropriation of Māori knowledge and taonga, more substantive protections are still needed."