Sunday, March 24, 2024

Generative AI could leave users holding the bag for copyright violations; The Conversation, March 22, 2024

Professor of Information Systems, Michigan State University, The Conversation; Generative AI could leave users holding the bag for copyright violations

"How to build guardrails

Legal scholars have dubbed the challenge of developing guardrails against copyright infringement in AI tools the “Snoopy problem.” The more a copyrighted work protects a likeness – for example, the cartoon character Snoopy – the more likely it is a generative AI tool will copy it compared to copying a specific image."

Saturday, March 23, 2024

Tennessee becomes the first state to protect musicians and other artists against AI; NPR, March 22, 2024

Rebecca Rosman, NPR; Tennessee becomes the first state to protect musicians and other artists against AI

"Tennessee made history on Thursday, becoming the first U.S. state to sign off on legislation to protect musicians from unauthorized artificial intelligence impersonation.

"Tennessee (sic) is the music capital of the world, & we're leading the nation with historic protections for TN artists & songwriters against emerging AI technology," Gov. Bill Lee announced on social media.

The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, is an updated version of the state's old right of publicity law. While the old law protected an artist's name, photograph or likeness, the new legislation includes AI-specific protections."

Thursday, March 21, 2024

Canada moves to protect coral reef that scientists say ‘shouldn’t exist’; The Guardian, March 15, 2024

The Guardian; Canada moves to protect coral reef that scientists say ‘shouldn’t exist’

"For generations, members of the Kitasoo Xai’xais and Heiltsuk First Nations, two communities off the Central Coast region of British Columbia, had noticed large groups of rockfish congregating in a fjord system.

In 2021, researchers and the First Nations, in collaboration with the Canadian government, deployed a remote-controlled submersible to probe the depths of the Finlayson Channel, about 300 miles north-west of Vancouver.

On the last of nearly 20 dives, the team made a startling discovery – one that has only recently been made public...

The discovery marks the latest in a string of instances in which Indigenous knowledge has directed researchers to areas of scientific or historic importance. More than a decade ago, Inuk oral historian Louie Kamookak compared Inuit stories with explorers’ logbooks and journals to help locate Sir John Franklin’s lost ships, HMS Erebus and HMS Terror. In 2014, divers located the wreck of the Erebus in a spot Kamookak suggested they search, and using his directions found the Terror two years later."

Wednesday, March 20, 2024

Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini; TechCrunch, March 20, 2024

Natasha Lomas, Romain Dillet, TechCrunch; Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini

"In a never-ending saga between Google and France’s competition authority over copyright protections for news snippets, the Autorité de la Concurrence announced a €250 million fine against the tech giant Wednesday (around $270 million at today’s exchange rate).

According to the competition watchdog, Google disregarded some of its previous commitments with news publishers. But the decision is especially notable because it drops something else that’s bang up-to-date — by latching onto Google’s use of news publishers’ content to train its generative AI model Bard/Gemini.

The competition authority has found fault with Google for failing to notify news publishers of this GenAI use of their copyrighted content. This is in light of earlier commitments Google made which are aimed at ensuring it undertakes fair payment talks with publishers over reuse of their content."

Thursday, March 14, 2024

C-e-a-s-e and desist: 'The New York Times' goes after Wordle spinoffs; NPR, March 13, 2024

NPR; C-e-a-s-e and desist: 'The New York Times' goes after Wordle spinoffs

"The New York Times has sent takedown notices to "hundreds" of coders who've made clones of the popular word game, Wordle.

Wordle is a hit online sensation where players have to guess a five-letter word in six tries. Since the newspaper bought it in 2022 from creator Welsh software engineer Josh Wardle, the word game has spawned a litany of spinoffs, from the more complex Quordle to the irreverent Sweardle.

Now, the Times is accusing some Wordle clone creators of copyright infringement violations and asking that their code be removed from the website GitHub, a platform that lets developers publicly share their code. The news was first reported last week by 404 Media."

U.S. Copyright Office and USPTO Conclude Joint Study on Non-Fungible Tokens; U.S. Copyright Office, March 12, 2024

U.S. Copyright Office; U.S. Copyright Office and USPTO Conclude Joint Study on Non-Fungible Tokens

"Today, the U.S. Copyright Office and the U.S. Patent and Trademark Office (USPTO) (collectively the “Offices”) published the results of their joint study on the intellectual property (IP) law and policy implications of non-fungible tokens (NFTs). The Offices conducted the study in response to a June 2022 request from then-Chair of the Senate Committee on the Judiciary Subcommittee on Intellectual Property Patrick Leahy and Ranking Member Thom Tillis.

During the joint study, the Offices solicited public comments via a notice of inquiry, held three public roundtables, and examined existing literature and case law. 

In their report, the Offices acknowledged commenters’ views that NFTs may enable artists to secure remuneration for downstream resales of their works; aid trademark owners in expanding their brand appeal; and play a supportive role in the management, transfer, or licensing of IP rights. They also recognized concerns that buyers and sellers do not know what IP rights are implicated in the creation, marketing, and transfer of NFTs and that NFTs may be used to facilitate copyright or trademark infringement. The Offices concluded, however, that existing statutory enforcement mechanisms are currently sufficient to address infringement concerns related to NFT applications and that changes to IP laws, or to the Offices’ registration and recordation practices, are not necessary or advisable at this time. Rather, public education initiatives and product transparency play an important role in ensuring greater awareness and understanding about NFTs.

“We are pleased to share the results of our joint study with Congress, stakeholders, and the public,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “The report reflects extensive input from a broad spectrum of commenters, including creators, brand owners, innovators, academics, and practitioners. We look forward to continuing to engage with stakeholders on emerging technologies and implications for IP rights.”

“NFTs offer unique opportunities for creators to leverage their IP rights but also present new challenges in keeping their work secure,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “At the USPTO, we continue to work side-by-side with industry and government collaborators such as the Copyright Office to better understand the IP implications of these evolving technologies through initiatives such as our Artificial Intelligence (AI) and Emerging Technologies (ET) Partnership. We look forward to continuing these efforts and our ongoing work to ensure USPTO’s practices and U.S. policy evolve to address emerging technologies so that we best serve the needs of our nation’s creators and innovators.” 

The full study is available on the Copyright Office’s website and the USPTO’s website."

Moral Rights of the Artist (Humans Only): an Updated US Perspective; Center for Art Law Inc, January 12, 2024

Irina Tarsis, Center for Art Law Inc; Moral Rights of the Artist (Humans Only): an Updated US Perspective

"I Defining and codifying moral rights

Moral rights, or droit moral (having originated in France), describe rights of creators in their artistic work that are not necessarily pecuniary, yet still integral to and arise from the idea that an artist's very being is included in the work that they create. Recognition and evolution of visual artists' rights in the United States have been slow to develop, and the scope of moral rights enacted in the United States is limited.

Typically, moral rights are neither alienable nor waivable; they last for the duration of an artist's lifetime and can survive for the benefit and discretion of an artist's estate even after the original work is finished or changes ownership through the stream of public commerce.18 The basic moral rights are as follows:

  1. right of attribution or authorship, which entitles the artist to:
    • be recognised by name for their work or permit the work to be published anonymously;
    • prevent a wrong person being named as the author of their work;
    • prevent having their name be associated with a work that they did not create;
    • decline having their name be associated with a work that has been modified or distorted in such a way as having the authorship remain with the work is prejudicial to the artist; and
    • remove their name from the work in cases of mutilation or the artist's belief that the work is no longer true to its original creation; and
  2. right of integrity, which prevents tampering or modifying the artwork without the artist's consent even after ownership in the artwork transfers;
  3. right of disclosure, which concerns the artist's reputation and provides that the artist has discretion to decide when and how their work can be made public; and
  4. resale royalty rights, which is a semi-economic right assuring that an artist may continue to benefit financially from commercial appreciation of their work in the secondary market by receiving a percentage of the sale proceeds.

These rights are enumerated in the Berne Convention for the Protection of Literary and Artistic Works (the Convention) under Articles 6 bis19 and 14 ter.20 While the US is a signatory to the Convention,21 according to the US Berne Convention Implementation Act of 1988, the Convention's acts and protocols are not self-executing under the US Constitution, and they must be implemented through US legislation...

ii Space, the final frontier: back to the future

Statutorily, moral rights of artists in the United States are poorly protected and narrowly enforced; some seemingly substantive claims are dismissed on procedural grounds: failure to state a claim,114 as res judicata115 or on industrial design grounds.116 As soon as US artists die, the moral rights evaporate altogether. The cavalier rather than chivalrous attitude towards the vision and the will of the artist persists as tastes and emphasis change. Consider, for example, museums deaccessioning art that was once donated directly by an artist.117 Similarly, consider the trend of covering up or removing the New Deal-Era murals from schools and court houses,118 which is evocative of the problem posed by Serra's Tilted Arc in the 1980s. Until the US Supreme Court reviews a VARA case, or Congress amends the Copyright Law to include a resale royalty provision, artists and their advocates remain limited by the available moral rights protections, and they have to be creative in using public and private law to protect artists' rights. For digital artists, the ability to enforce some of their moral rights is becoming easier, at least on the screen.

As protest art and street art become mainstream, and AI-generated art attracts more fans, in real life and in the metaverse,119 moral rights of artists in the United States are still protected by a patchwork of case law, contracts and state and federal regulations. During the 'Some Like it Digital: Meet Me in the Metaverse' webinars hosted by the Center for Art Law in 2022,120 a guest speaker mentioned that the constitutional law for the metaverse is, or will be, the Copyright Law, for better or worse.

Going forward, will there be more artists expressing themselves in the other verse? In space, outside the boundaries of planet Earth, where their moral right will still need to be protected, for now only against other humans?

In Thaler, District of Columbia Court asked 'Must … originator be a human being to claim copyright protection?' and answered 'yes'; with a footnote: 'The issue of whether non-human sentient beings may be covered by “person” in the Copyright Act is only “fun conjecture for academics”',121 though useful in illuminating the purposes and limits of copyright protection as AI is increasingly employed. Nonetheless, delving into this debate is an unnecessary detour because “[t]he day sentient refugees from some intergalactic war arrive on Earth and are granted asylum in Iceland, copyright law will be the least of our problems”.122 With human-generated art being sent into space, though, the subject of protecting artists' rights (freedom of expression, attribution, etc) will increasingly abut against international treaties controlling the parameters of sending objects into and outside of our solar system, onto other planets and for display on space stations."

Monday, March 11, 2024

Japan's universities fail to make the most of intellectual property; Nikkei Asia, March 9, 2024

Kenjiro Suzuki, Nikkei Asia; Japan's universities fail to make the most of intellectual property

"Due to lack of support, patents earn only 2% compared to U.S. schools"

Nvidia sued over AI training data as copyright clashes continue; Ars Technica, March 11, 2024

Ars Technica; Nvidia sued over AI training data as copyright clashes continue

"Book authors are suing Nvidia, alleging that the chipmaker's AI platform NeMo—used to power customized chatbots—was trained on a controversial dataset that illegally copied and distributed their books without their consent.

In a proposed class action, novelists Abdi Nazemian (Like a Love Story), Brian Keene (Ghost Walk), and Stewart O’Nan (Last Night at the Lobster) argued that Nvidia should pay damages and destroy all copies of the Books3 dataset used to power NeMo large language models (LLMs).

The Books3 dataset, novelists argued, copied "all of Bibliotek," a shadow library of approximately 196,640 pirated books. Initially shared through the AI community Hugging Face, the Books3 dataset today "is defunct and no longer accessible due to reported copyright infringement," the Hugging Face website says.

According to the authors, Hugging Face removed the dataset last October, but not before AI companies like Nvidia grabbed it and "made multiple copies." By training NeMo models on this dataset, the authors alleged that Nvidia "violated their exclusive rights under the Copyright Act." The authors argued that the US district court in San Francisco must intervene and stop Nvidia because the company "has continued to make copies of the Infringed Works for training other models.""

Thursday, March 7, 2024

Introducing CopyrightCatcher, the first Copyright Detection API for LLMs; Patronus AI, March 6, 2024

Patronus AI; Introducing CopyrightCatcher, the first Copyright Detection API for LLMs

"Managing risks from unintended copyright infringement in LLM outputs should be a central focus for companies deploying LLMs in production.

  • On an adversarial copyright test designed by Patronus AI researchers, we found that state-of-the-art LLMs generate copyrighted content at an alarmingly high rate 😱
  • OpenAI’s GPT-4 produced copyrighted content on 44% of the prompts.
  • Mistral’s Mixtral-8x7B-Instruct-v0.1 produced copyrighted content on 22% of the prompts.
  • Anthropic’s Claude-2.1 produced copyrighted content on 8% of the prompts.
  • Meta’s Llama-2-70b-chat produced copyrighted content on 10% of the prompts.
  • Check out CopyrightCatcher, our solution to detect potential copyright violations in LLMs. Here’s the public demo, with open source model inference powered by Databricks Foundation Model APIs. 🔥

LLM training data often contains copyrighted works, and it is pretty easy to get an LLM to generate exact reproductions from these texts1. It is critical to catch these reproductions, since they pose significant legal and reputational risks for companies that build and use LLMs in production systems2. OpenAI, Anthropic, and Microsoft have all faced copyright lawsuits on LLM generations from authors3, music publishers4, and more recently, the New York Times5.

To check whether LLMs respond to your prompts with copyrighted text, you can use CopyrightCatcher. It detects when LLMs generate exact reproductions of content from text sources like books, and highlights any copyrighted text in LLM outputs. Check out our public CopyrightCatcher demo here!"
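
The excerpt describes detecting exact reproductions of known texts in model outputs. As a rough illustration of how such a check might work (a toy sketch, not the CopyrightCatcher implementation, whose method is not described here), the short Python example below flags an output when a noticeable share of its word n-grams appears verbatim in a reference text; the function names, the n-gram size, and the threshold are illustrative assumptions.

```python
# Toy sketch: flag verbatim reproduction of reference text in an LLM output.
# NOT the CopyrightCatcher implementation; it only illustrates exact-match
# detection against a known corpus. N-gram size and threshold are arbitrary.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_reproduction(llm_output: str, reference_texts: list[str],
                      n: int = 8, threshold: float = 0.05) -> bool:
    """Flag the output if a noticeable share of its n-grams appears verbatim
    in any reference (copyrighted) text."""
    out_grams = ngrams(llm_output, n)
    if not out_grams:
        return False
    for ref in reference_texts:
        overlap = len(out_grams & ngrams(ref, n)) / len(out_grams)
        if overlap >= threshold:
            return True
    return False

if __name__ == "__main__":
    reference = ["It was the best of times, it was the worst of times, it was the age of wisdom"]
    output = "It was the best of times, it was the worst of times, said the model."
    print(flag_reproduction(output, reference))  # True for this toy example
```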

Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst; CNBC, March 6, 2024

Hayden Field, CNBC; Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst

"The company, founded by ex-Meta researchers, specializes in evaluation and testing for large language models — the technology behind generative AI products.

Alongside the release of its new tool, CopyrightCatcher, Patronus AI released results of an adversarial test meant to showcase how often four leading AI models respond to user queries using copyrighted text.

The four models it tested were OpenAI’s GPT-4, Anthropic’s Claude 2, Meta’s Llama 2 and Mistral AI’s Mixtral.

“We pretty much found copyrighted content across the board, across all models that we evaluated, whether it’s open source or closed source,” Rebecca Qian, Patronus AI’s cofounder and CTO, who previously worked on responsible AI research at Meta, told CNBC in an interview.

Qian added, “Perhaps what was surprising is that we found that OpenAI’s GPT-4, which is arguably the most powerful model that’s being used by a lot of companies and also individual developers, produced copyrighted content on 44% of prompts that we constructed.”"
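
As a purely illustrative companion to the test described above, the sketch below shows the general shape of such an adversarial evaluation: prompt a model to continue passages from books, then measure the fraction of completions that reproduce the source text verbatim. The query_model placeholder and the substring-matching rule are assumptions for the sketch, not Patronus AI's actual harness or scoring criteria.

```python
# Illustrative only: a toy harness for the kind of adversarial test described above.
# `query_model` is a placeholder for a real model call (e.g., an API client); the
# matching rule (a long verbatim substring) is an assumption, not Patronus AI's method.

from typing import Callable

def reproduces_source(completion: str, source: str, min_chars: int = 160) -> bool:
    """Toy check: does the completion contain a long verbatim chunk of the source?"""
    comp, src = " ".join(completion.split()), " ".join(source.split())
    return any(src[i:i + min_chars] in comp
               for i in range(0, max(1, len(src) - min_chars), 40))

def copyrighted_output_rate(test_set: list[dict],
                            query_model: Callable[[str], str]) -> float:
    """test_set items: {"prompt": adversarial prompt, "source": passage the model
    should not reproduce}. Returns the fraction of prompts yielding reproductions."""
    if not test_set:
        return 0.0
    hits = sum(
        reproduces_source(query_model(item["prompt"]), item["source"])
        for item in test_set
    )
    return hits / len(test_set)

# A return value of 0.44 would correspond to the reported "44% of prompts" figure for GPT-4.
```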

Public Symposium on AI and IP; United States Patent and Trademark Office (USPTO), Wednesday, March 27, 2024 10 AM - 3 PM PT/1 PM - 6 PM ET

United States Patent and Trademark Office (USPTO); Public Symposium on AI and IP

"The United States Patent and Trademark Office (USPTO) Artificial Intelligence (AI) and Emerging Technologies (ET) Partnership will hold a public symposium on intellectual property (IP) and AI. The event will take place virtually and in-person at Loyola Law School, Loyola Marymount University, in Los Angeles, California, on March 27, from 10 a.m. to 3 p.m. PT. 

The symposium will facilitate the USPTO’s efforts to implement its obligations under the President’s Executive Order (E.O.) 14110 “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The event will include representation from the Copyright Office, build on previous AI/Emerging Technologies (ET) partnership events, and feature panel discussions by experts in the field of patent, trademark, and copyright law that focus on:

  1. A comparison of copyright and patent law approaches to the type and level of human contribution needed to satisfy authorship and inventorship requirements;
  2. Ongoing copyright litigation involving generative AI; and 
  3. A discussion of laws and policy considerations surrounding name, image, and likeness (NIL) issues, including the intersection of NIL and generative AI.

This event is free and open to the public, but in-person attendance is limited, so register early"

Wednesday, March 6, 2024

Pitt ranks in top 20 for patents granted to universities worldwide; PittWire, February 15, 2024

Mike Yeomans, PittWire; Pitt ranks in top 20 for patents granted to universities worldwide

"The University of Pittsburgh has ranked No. 19 of the Top 100 Worldwide Universities Granted Utility Patents in 2023, according to a list published by the National Academy of Inventors (NAI).

Released annually by the NAI since 2013, the Top 100 Worldwide Universities List spotlights the universities holding U.S. utility patents to showcase the important research and innovation taking place within academic institutions.

Pitt innovators were issued 114 U.S. patents in calendar year 2023, compared to 105 in 2022."

Thursday, February 29, 2024

The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement; The Guardian, February 28, 2024

The Guardian; The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement

"OpenAI and Microsoft are facing a fresh round of lawsuits from news publishers over allegations that their generative artificial intelligence products violated copyright laws and illegally trained by using journalists’ work. Three progressive US outlets – the Intercept, Raw Story and AlterNet – filed suits in Manhattan federal court on Wednesday, demanding compensation from the tech companies.

The news outlets claim that the companies in effect plagiarized copyright-protected articles to develop and operate ChatGPT, which has become OpenAI’s most prominent generative AI tool. They allege that ChatGPT was trained not to respect copyright, ignores proper attribution and fails to notify users when the service’s answers are generated using journalists’ protected work."

Monday, February 19, 2024

George Santos sues Jimmy Kimmel for copyright infringement over Cameo videos used on his show; BoingBoing, February 18, 2024

BoingBoing; George Santos sues Jimmy Kimmel for copyright infringement over Cameo videos used on his show

"George Santos, the disgraced Republican lawmaker and alleged criminal finally ejected from Congress as he awaits trial, is suing late-night host Jimmy Kimmel. Santos takes money to say what you like on the video platform Cameo; Kimmel paid him to say ridiculous things that were then broadcast on his show."

Sunday, February 18, 2024

No Fair Use for Photo Used Without Required Attribution; The National Law Review, February 15, 2024

Timothy M. Dunker of McDermott Will & Emery, The National Law Review; No Fair Use for Photo Used Without Required Attribution

"The US Court of Appeals for the Fourth Circuit concluded that the copyright on a photograph of an entertainment icon was the subject of a valid copyright registration and that use of the photograph in an article missing the author’s required attribution language was not otherwise “fair use.” Philpot v. Independent Journal Review, Case No. 21-2021 (4th Cir. Feb. 6, 2024) (King, Wynn, Rushing, JJ.)

Larry Philpot, a professional concert photographer, photographed Ted Nugent at a concert in July 2013. In August 2013, Philpot registered the photograph with the US Copyright Office and published the photograph on Wiki Commons under a Creative Commons License specifying that anyone could use the photograph for free as long as they provided the following attribution: “Photo Credit: Larry Philpot of www.soundstagephotography.com.”"

The Death of the Litmus Test; Chicago-Kent Intellectual Property Journal, December 20, 2023

Dale Cendali, Abbey Quigley, Chicago-Kent Intellectual Property Journal; The Death of the Litmus Test

Saturday, February 17, 2024

The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission; The Conversation, February 13, 2024

Senior Lecturer, Nottingham Law School, Nottingham Trent University, The Conversation; The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission

"The lawsuit also presents a novel argument – not advanced by other, similar cases – that’s related to something called “hallucinations”, where AI systems generate false or misleading information but present it as fact. This argument could in fact be one of the most potent in the case.

The NYT case in particular raises three interesting takes on the usual approach. First, that due to their reputation for trustworthy news and information, NYT content has enhanced value and desirability as training data for use in AI. 

Second, that due to its paywall, the reproduction of articles on request is commercially damaging. Third, that ChatGPT “hallucinations” are causing reputational damage to the New York Times through, effectively, false attribution. 

This is not just another generative AI copyright dispute. The first argument presented by the NYT is that the training data used by OpenAI is protected by copyright, and so they claim the training phase of ChatGPT infringed copyright. We have seen this type of argument run before in other disputes."

Friday, February 16, 2024

How AI copyright lawsuits could make the whole industry go extinct; The Verge, February 15, 2024

Nilay Patel, The Verge; How AI copyright lawsuits could make the whole industry go extinct

"Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and for the next few weeks, we’re going to stay focused on one of the biggest topics of all: generative AI. 

There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use."


How to Think About Remedies in the Generative AI Copyright Cases; LawFare, February 15, 2024

Pamela Samuelson, LawFare; How to Think About Remedies in the Generative AI Copyright Cases

"So far, commentators have paid virtually no attention to the remedies being sought in the generative AI copyright complaints. This piece shines a light on them."

Thursday, February 15, 2024

Judge rejects most ChatGPT copyright claims from book authors; Ars Technica, February 13, 2024

Ars Technica; Judge rejects most ChatGPT copyright claims from book authors

"A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission."

Monday, February 12, 2024

AI and inventorship guidance: Incentivizing human ingenuity and investment in AI-assisted inventions; United States Patent and Trademark Office (USPTO), February 12, 2024

Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO, United States Patent and Trademark Office (USPTO); Director's Blog: the latest from USPTO leadership

AI and inventorship guidance: Incentivizing human ingenuity and investment in AI-assisted inventions

"Today, based on the exceptional public feedback we’ve received, we announced our Inventorship Guidance for AI-Assisted Inventions in the Federal Register – the first of these directives. The guidance, which is effective on February 13, 2024, provides instructions to examiners and stakeholders on how to determine whether the human contribution to an innovation is significant enough to qualify for a patent when AI also contributed. The guidance embraces the use of AI in innovation and provides that AI-assisted inventions are not categorically unpatentable. The guidance instructs examiners on how to determine the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems. Additionally, we’ve posted specific examples of hypothetical situations and how the guidance would apply to those situations to further assist our examiners and applicants in their understanding."

Inventorship guidance for AI-assisted inventions webinar; United States Patent and Trademark Office (USPTO), March 5, 2024 1 PM - 2 PM ET

United States Patent and Trademark Office (USPTO) ; Inventorship guidance for AI-assisted inventions webinar

"The United States Patent and Trademark Office (USPTO) plays an important role in incentivizing and protecting innovation, including innovation enabled by artificial intelligence (AI), to ensure continued U.S. leadership in AI and other emerging technologies (ET).

The USPTO announced Inventorship Guidance for AI-Assisted Inventions in the Federal Register. This guidance is pursuant to President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023) with provisions addressing IP equities. The guidance, which is effective on February 13, 2024, provides instructions to USPTO personnel and stakeholders on determining the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems.

The USPTO will host a webinar on Inventorship Guidance for AI-Assisted Inventions on Tuesday, March 5, from 1-2 p.m. EST. USPTO personnel will provide an overview of the guidance and answer stakeholder questions relating to the guidance.

This event is free and open to the public, but virtual space is limited, so please register early."


On Copyright, Creativity, and Compensation; Reason, February 12, 2024

Reason; On Copyright, Creativity, and Compensation

"Some of you may have seen the article by David Segal in the Sunday NY Times several weeks ago [available here] about a rather sordid copyright fracas in which I have been embroiled over the past few months...

What to make of all this? I am not oblivious to the irony of being confronted with this problem after having spent 30 years or so, as a lawyer and law professor, reflecting on and writing about the many mysteries of copyright policy and copyright law in the Internet Age.

Here are a few things that strike me as interesting (and possibly important) in this episode."

Wednesday, February 7, 2024

EU countries strike deal on landmark AI rulebook; Politico, February 2, 2024

Gian Volpicelli, Politico; EU countries strike deal on landmark AI rulebook

"European Union member countries on Friday unanimously reached a deal on the bloc’s Artificial Intelligence Act, overcoming last-minute fears that the rulebook would stifle European innovation.

EU deputy ambassadors green-lighted the final compromise text, hashed out following lengthy negotiations between representatives of the Council, members of the European Parliament and European Commission officials...

Over the past few weeks, the bloc’s top economies Germany and France, alongside Austria, hinted that they might oppose the text in Friday’s vote...

Eventually, the matter was resolved through the EU’s familiar blend of PR offensive and diplomatic maneuvering. The Commission ramped up the pressure by announcing a splashy package of pro-innovation measures targeting the AI sector, and in one fell swoop created the EU’s Artificial Intelligence Office — a body tasked with enforcing the AI Act...

A spokesperson for German Digital Minister Volker Wissing, the foremost AI Act skeptic within Germany’s coalition government, told POLITICO: "We asked the EU Commission to clarify that the AI Act does not apply to the use of AI in medical devices."

A statement from the European Commission, circulated among EU diplomats ahead of the vote and seen by POLITICO, reveals plans to set up an “expert group” comprising EU member countries’ authorities. The group’s function will be to “advise and assist” the Commission in applying and implementing the AI Act...

The AI Act still needs the formal approval of the European Parliament. The text is slated to get rubber-stamped at the committee level in two weeks, with a plenary vote expected in April."