Saturday, June 29, 2024

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoë Corbyn, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

 Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

GenAI in focus: Understanding the latest trends and considerations; Thomson Reuters, June 27, 2024

Thomson Reuters; GenAI in focus: Understanding the latest trends and considerations

"Legal professionals, whether they work for law firms, corporate legal departments, government, or in risk and fraud, have generally positive perceptions of generative AI (GenAI). According to the professionals surveyed in the Thomson Reuters Institute’s 2024 GenAI in Professional Services report, 85% of law firm and corporate attorneys, 77% of government legal practitioners, and 82% of corporate risk professionals believe that GenAI can be applied to industry work.  

But should it be applied? There, those positive perceptions softened a bit, with 51% of law firm respondents, 60% of corporate legal practitioners, 62% of corporate risk professionals, and 40% of government legal respondents saying yes.  

In short, professionals’ perceptions of AI include concerns and interest in its capabilities. Those concerns include the ethics of AI usage and mitigating related risks. These are important considerations. But they don’t need to keep professionals from benefiting from all that GenAI can do. Professionals can minimize many of the potential risks by becoming familiar with responsible AI practices."

Friday, June 28, 2024

Native Hawaiian Intellectual Property Working Group convenes inaugural meeting; The Maui News, June 27, 2024

The Maui News; Native Hawaiian Intellectual Property Working Group convenes inaugural meeting

"On June 24, the Native Hawaiian Intellectual Property Working Group (NHIPWG) comprised of experts in native Hawaiian law, indigenous intellectual property, and cultural practitioners held its first meeting at the Hawai’i State Capitol.

The working group was established following the adoption of HCR108 by the House of Representatives in 2023. This resolution urged the creation of a nine-member working group to study policies and legislation concerning native Hawaiian intellectual property. Representative Darius K. Kila, who introduced HCR108, explained that the formation of the group represents efforts to protect the intellectual property rights of Kānaka Maoli, as well as their cultural expressions, language, and art form."

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Tuesday, June 25, 2024

As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move; The Hollywood Reporter, June 24, 2024

 Winston Cho, The Hollywood Reporter; As AI Marches on Music Industry, Record Labels Counter With $350M Lawsuits In Assertive Move

"There’ve been over 25 copyright infringement lawsuits filed against artificial intelligence companies from most corners of the creative industries. Groups that’ve sued include artists, authors and news publishers, among various others alleging mass theft of their copyrighted works to teach AI systems. Of those entities, record companies have taken the most aggressive stance thus far in trying to get out in front of the technology."

Monday, June 24, 2024

AI: World's biggest music labels sue over copyright; BBC News, June 24, 2024

Natalie Sherman, BBC News; AI: World's biggest music labels sue over copyright

"The world's biggest record labels are suing two artificial intelligence (AI) start-ups over alleged copyright violation in a potentially landmark case.

Firms including Sony Music, Universal Music Group and Warner Records say Suno and Udio have committed copyright infringement on an "almost unimaginable scale".

They claim the pair's software steals music to "spit out" similar work and ask for compensation of $150,000 per work."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

 Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

Intellectual property and entrepreneurship resources for the military community; United States Patent and Trademark Office (USPTO), May 31, 2024

United States Patent and Trademark Office (USPTO); Intellectual property and entrepreneurship resources for the military community

"Earlier this month at Fort Buchanan, Puerto Rico, an Army veteran and business owner said he wished this valuable USPTO program had been around when he started his business.

Entrepreneurship Essentials Roadshows are part of the From Service to Success program and reflect the USPTO’s mission of inclusive innovation, meeting potential entrepreneurs and small business owners where they are with targeted programming. Roadshows visit military bases worldwide and help by:

  • Providing encouragement from military leadership.
  • Sharing tips from experts on obtaining funding, identifying markets, writing and executing business plans, and hearing from other entrepreneurs.
  • Offering practical information to protect valuable innovations.
  • Networking with other entrepreneurs."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from the Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Friday, June 21, 2024

Using AI to Create Content? Watch Out for Copyright Violations; Chicago Business Attorney Blog, June 20, 2024

Chicago Business Attorney Blog; Using AI to Create Content? Watch Out for Copyright Violations

"Businesses using generative AI programs like ChatGPT to create any content—whether for blogs, websites or other marketing materials, and whether text, visuals, sound or video—need to ensure that they’re not inadvertently using copyrighted materials in the process.

Clearly, the times they are a-changing… and businesses need to adapt to the changes. Employers should promulgate messages to their employees and contractors updating their policy manuals to ensure that communications professionals and others crafting content are aware of the risks of using AI-generated materials, which go beyond the possibility that they are “hallucinated” rather than factual—although that’s worth considering, too."

Tuesday, June 18, 2024

POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, June 14, 2024

The Vatican; POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, Borgo Egnazia (Puglia)

[Excerpt]

            "An exciting and fearsome tool


 Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

“Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have ‘skill and understanding and knowledge in every craft’ (Ex 35:31)”. [1] Science and technology are therefore brilliant products of the creative potential of human beings. [2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings. [3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use. [4]"

‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate; Bloomberg Law, June 18, 2024

Laura Heymann, Bloomberg Law; ‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate

"The US Supreme Court’s June 13 decision in the “Trump Too Small” trademark case revealed a potential rift among the justices on First Amendment jurisprudence but did little to advance intellectual property law...

Trademark law, the Supreme Court has said in prior cases, is primarily about two goals: preventing confusion among consumers by ensuring accurate source identification and preserving trademark owners’ reputation and goodwill. For these justices, the names clause passed muster because prohibiting the registration of personal names without consent was self-evidently reasonable in light of these purposes; no further analysis was required."

Monday, June 17, 2024

USPTO Seeks Public Comment on the Impact of Artificial Intelligence on Patentability; JDSupra, June 17, 2024

Ivy Clarice Estoesta and Roozbeh Gorgin, JDSupra; USPTO Seeks Public Comment on the Impact of Artificial Intelligence on Patentability

"The relentless march of technological progress presents a unique challenge for the intellectual property (IP) landscape. Earlier this year, the United States Patent and Trademark Office (USPTO) issued a Request for Comments (RFC) on inventorship guidance for AI-assisted inventions. See 89 FR 10043, available here (last visited June 10, 2024). Many responses to that inquiry, which closes on June 20, 2024, have encouraged the USPTO to investigate how AI impacts obviousness determinations. Not surprisingly, the USPTO recently issued a RFC seeking public input on the potential impact of Artificial Intelligence (AI) on prior art, the knowledge of a person having ordinary skill in the art (PHOSITA), and determinations of patentability. See89 FR 34217, p. 34217, available here (last visited June 10, 2024). This client alert summarizes that RFC and delves into the complexities surrounding AI and patents, exploring the implications for patent applications, patent owners, patent practitioners, and the future of IP law."

A classical composer just brought down a Chinese state-run YouTube channel over copyright infringement; Fast Company, June 17, 2024

Chris Stokel-Walker, Fast Company; A classical composer just brought down a Chinese state-run YouTube channel over copyright infringement

"Despite the fact that CCTV-6’s YouTube channel, which hosted 22,000 videos and had 2.5 million followers, had received more than the three copyright strikes that would ordinarily result in a termination (or permanent ban of the account), YouTube didn’t take action beyond confirming to Muzzey that the infringing content wouldn’t be reinstated. “YouTube is immunizing CCTV-6 against my claims, even though I have an email directly from them saying, ‘We know this is false. We’re doing it to buy some time,’” he claims. (YouTube declined to comment for this story.)

Not terminating CCTV-6’s channel appeared to be in violation of YouTube’s own rules—and crucially allowed the TV network to continue to upload new content. That was the status quo, until Muzzey asked his litigation counsel to send a legal notice to YouTube pointing out the breaches of copyright law. Two days later, after the litigation counsel presented the same evidence Muzzey did, YouTube took the channel down entirely."

Friday, June 14, 2024

Pope Francis is first pontiff to address G7 leaders with AI speech; Axios, June 14, 2024

"Pope Francis made history Friday as the first pontiff to speak at the Group of Seven meeting in Fasano, Italy, where he discussed his concerns with artificial intelligence.

Why it matters: The pope has long urged caution around AI, calling it "a fascinating tool and also a terrifying one," during his remarks Friday even as he acknowledged its potential applications in medicine, labor, culture, communications, education and politics. 

  • "The holy scriptures say that God gave to human beings his spirit in order for them to have wisdom, intelligence and knowledge in all kinds of tasks," he said. "Science and technology are therefore extraordinary products of the potential which is active in us human beings.""

OpenAI Asks to Merge New York Times, Daily News Copyright Suits; Bloomberg Law, June 14, 2024

Aruni Soni, Bloomberg Law; OpenAI Asks to Merge New York Times, Daily News Copyright Suits

"OpenAI Inc. asked a federal court to combine the “functionally identical” copyright lawsuits filed against it by The New York Times Co. and The Daily News LP alleging it used their content to create artificial intelligence services."

How Pope Francis became the AI ethicist for tech titans and world leaders; The Washington Post, June 14, 2024

The Washington Post; How Pope Francis became the AI ethicist for tech titans and world leaders

"In the same way the pope sought to elevate the urgency of climate change, Francis now is zooming in on AI — a technology he has said poses “a risk to our survival” and could “endanger our common home.”

His concerns center less on sci-fi movie predictions of malevolent machines, or how the possibility of sentient AI might challenge the fundamental definition of life. Rather, he has focused on the apocalyptic potential of weaponized AI. He fears the elimination of human empathy as algorithms begin to decide the outcome of everything from asylum applications to bank loans. He has heralded “the exciting opportunities” of the technology, but frets over its power to deepen inequality within and among nations. Those risks, he insists, must be managed through a global treaty to regulate AI."

Thursday, June 13, 2024

AI copyright fight turns to disclosing original content; Voice of America (VoA), June 13, 2024

 Matt Dibble, Voice of America (VoA); AI copyright fight turns to disclosing original content

"Artists and other creators say their works have been used to build the multibillion-dollar generative AI industry without any compensation for them. Matt Dibble reports on a proposed U.S. law that would force AI companies to reveal their sources."

Wednesday, June 12, 2024

Patently insufficient: a new intellectual property treaty does little to protect Māori traditional knowledge; The Conversation, June 9, 2024

Senior Lecturer Above the Bar, University of Canterbury; Senior Lecturer in Management, Te Herenga Waka — Victoria University of Wellington; Professor of Commercial Law, Te Herenga Waka — Victoria University of Wellington, The Conversation; Patently insufficient: a new intellectual property treaty does little to protect Māori traditional knowledge

"The problem of “biopiracy” – the misappropriation and patenting for profit of Indigenous knowledge – has been on the rise for some time. So a global treaty aimed at protecting traditional knowledge and genetic resources should be a welcome development.

In late May, the World Intellectual Property Organization (WIPO) adopted the Treaty on Intellectual Property, Genetic Resources and Traditional Knowledge. It is the first international agreement on intellectual property that includes provisions on Indigenous peoples’ knowledge. 

More than 20 years in the making, it represents the culmination of negotiations between the 193 WIPO member states since 2000. And on the face of it, the treaty appears to be an important intervention to prevent biopiracy. 

However, the new agreement is unlikely to lead to major changes to New Zealand law, or improve the rights of Māori to own or control their intellectual property and taonga (treasured possessions). Given the well-documented misappropriation of Māori knowledge and taonga, more substantive protections are still needed." 

Adobe Responds to AI Fears With Plans for Updated Legal Terms; Bloomberg Law, June 12, 2024

Cassandre Coyer and Aruni Soni, Bloomberg Law; Adobe Responds to AI Fears With Plans for Updated Legal Terms

"“As technology evolves, we have to evolve,” Dana Rao, Adobe’s general counsel, said in an interview with Bloomberg Law. “The legal terms have to evolve, too. And that’s really the lesson that we’re sort of internalizing here.”

Over the weekend, some Adobe customers revolted on social media, crying foul at updated terms of use they claimed allowed Adobe to seize their intellectual property and use their data to feed AI models. 

The Photoshop and Illustrator maker responded with multiple blog posts over several days seeking to reassure users it wasn’t stealing their content, including a pledge to quickly rewrite its user agreement in clearer language. Rao said Tuesday that Adobe will be issuing updated terms of use on June 18 in which it will specifically state the company doesn’t train its Firefly AI models on its cloud content.

The unexpected online storm around the updates is the latest example of how sweeping technological changes—such as the rise of generative AI—have bolstered users’ fears of copyright violations and privacy invasions. That sentiment is part of the landscape the tech industry must navigate to serve a creator community increasingly on edge.

What happened is “more of a lesson in terms of how to present terms of use and roll out updates in a way that can address or alleviate customer concerns, especially in the era of AI and increased concern over privacy,” said Los Angeles-based advertising attorney Robert Freund."

Big Tech Launches Campaign to Defend AI Use; The Hollywood Reporter, June 6, 2024

Winston Cho, The Hollywood Reporter; Big Tech Launches Campaign to Defend AI Use

"Chamber of Progress, a tech industry coalition whose members include Amazon, Apple and Meta, is launching a campaign to defend the legality of using copyrighted works to train artificial intelligence systems.

The group says the campaign, called “Generate and Create” and unveiled on Thursday, will aim to highlight “how artists use generative AI to enhance their creative output” and “showcase how AI lowers barriers for producing art” as part of an initiative to “defend the longstanding legal principle of fair use under copyright law.”"

EFF-Austin: Generative AI & Copyright; June 11, 2024

 EFF-Austin: Generative AI & Copyright

"Generative AI & Copyright

From the Organizers:

One of the key legal issues currently being debated around Generative AI is whether or not tools like ChatGPT and Midjourney are in violation of copyright; namely, whether the act of using existing artistic content as training data to produce new content is a derivative or transformational use of the training data. In this talk, Ed will walk us through both what current copyright law has to say about the legal status of AI art, as well as what he believes the legal framework around AI art should be going forward.

Our speaker this month is Ed Cavazos. Ed Cavazos is a technology attorney and the managing partner of Pillsbury Winthrop Shaw Pittman’s Austin office. His practice focuses on intellectual property and cutting-edge issues in the internet, e-commerce, video game, and software industries. He has been involved in a wide variety of high-profile legal matters, including assisting on the Steve Jackson Games lawsuit against the U.S. Secret Service while still a law student, successfully defending tech companies against patent and copyright trolls, and representing some of the earliest entrepreneurs developing NFT and Metaverse-related business models. His 1994 book, Cyberspace and the Law (MIT Press) was one of the first texts exploring internet legal issues and over the years he has written a number of law review articles exploring similar issues. Ed was one of the founders of EFF-Austin in the early 1990’s and has, since then, served in various officer, board and advisory board roles.

Talk will be livestreamed at https://www.youtube.com/user/austintechlive and will later be archived at https://www.youtube.com/user/effaustin. Questions for the speaker from virtual attendees may be submitted via the Youtube livestream chat, our Twitter account, @EFFaustin, or our Mastodon account, @effaustin."

AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference; The Wrap, June 9, 2024

The Wrap; AI Copyright Issues ‘on Shifting Sands’ but Legal Protections Are Coming, Experts Tell PGA Produced By Conference

"Renard T. Jenkins — a former Warner Bros. Discovery exec who’s now president and CEO of I2A2 Technologies, Labs and Studios — said his company is working to help create an infrastructure to help with authenticating content.

“Back in the old days, you had watermarks,” he said, noting that file-based content can be altered to remove information about the original creator. “What we are attempting to do is create an infrastructure and ecosystem that would allow us to track every single iteration of a piece of content from its origins all the way through the distribution.” 

For that to happen, the PGA and other organizations would have to agree to a new standard. “It’s a very heavy lift,” he said, comparing the necessary level of cooperation to a cross-mafia agreement, describing it as the “five families of Hollywood coming together.”

He also suggested that blockchain technology could be used to “audit and track” every change to a piece of content. It’s the same tech used for Bitcoin and the much-maligned NFT digital assets."
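The iteration-tracking infrastructure Jenkins describes, and the blockchain-style auditing he suggests, can be pictured as a hash-chained provenance log in which each edit record points to the hash of the previous record, so any tampering with the history breaks the chain. The minimal Python sketch below is only an illustration of that general idea under assumed names (ProvenanceChain, add_iteration, and audit are hypothetical and not part of any actual I2A2 or PGA system).

```python
import hashlib
import json
import time


def _digest(payload: dict) -> str:
    """Hash a record's canonical JSON form so any later alteration is detectable."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


class ProvenanceChain:
    """Hypothetical hash-chained log of every iteration of a piece of content."""

    def __init__(self, origin_file_hash: str, creator: str):
        genesis = {
            "prev": None,  # the first link has no predecessor
            "file_hash": origin_file_hash,
            "creator": creator,
            "note": "original work",
            "timestamp": time.time(),
        }
        self.links = [{"record": genesis, "link_hash": _digest(genesis)}]

    def add_iteration(self, file_hash: str, editor: str, note: str) -> None:
        """Append a new version, chained to the hash of the previous link."""
        record = {
            "prev": self.links[-1]["link_hash"],
            "file_hash": file_hash,
            "creator": editor,
            "note": note,
            "timestamp": time.time(),
        }
        self.links.append({"record": record, "link_hash": _digest(record)})

    def audit(self) -> bool:
        """Recompute every hash; a single tampered record breaks the chain."""
        prev = None
        for link in self.links:
            if link["record"]["prev"] != prev or _digest(link["record"]) != link["link_hash"]:
                return False
            prev = link["link_hash"]
        return True


if __name__ == "__main__":
    # Illustrative use only: hashes and parties are made up.
    chain = ProvenanceChain(origin_file_hash="ab12cd34", creator="Composer A")
    chain.add_iteration(file_hash="ef56ab78", editor="Studio B", note="remaster")
    print("chain intact:", chain.audit())
```

In a real deployment the same chaining could live on a shared ledger rather than in one process, which is essentially what distinguishes the blockchain approach Jenkins mentions from a private log.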

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

  James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including FloridaCaliforniaNew YorkDC as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."