Friday, June 20, 2025

Two Major Lawsuits Aim to Answer a Multi-Billion-Dollar Question: Can AI Train on Your Creative Work Without Permission?; The National Law Review, June 18, 2025

Andrew R. Lee and Timothy P. Scanlan, Jr. of Jones Walker LLP, The National Law Review; Two Major Lawsuits Aim to Answer a Multi-Billion-Dollar Question: Can AI Train on Your Creative Work Without Permission?

"In a London courtroom, lawyers faced off in early June in a legal battle that could shape the future relationship between artificial intelligence and creative work. The case pits Getty Images, a major provider of stock photography, against Stability AI, the company behind the popular AI art generator, Stable Diffusion.

At the heart of the dispute is Getty's claim that Stability AI unlawfully used 12 million of its copyrighted images to train its AI model. The outcome of this case could establish a critical precedent for whether AI companies can use publicly available online content for training data or if they will be required to license it.

On the first day of trial, Getty's lawyer told the London High Court that the company “recognises that the AI industry overall may be a force for good,” but that did not justify AI companies “riding roughshod over intellectual property rights.”

A Key Piece of Evidence

A central component of Getty's case is the observation that Stable Diffusion's output sometimes includes distorted versions of the Getty Images watermark. Getty argues this suggests its images were not only used for training but are also being partially reproduced by the AI model.

Stability AI has taken the position that training an AI model on images constitutes a transformative use of that data. The argument is that teaching a machine from existing information is fundamentally different from direct copying."

Thursday, June 19, 2025

The Fight Over AI Is Just Beginning, and Artists Like Elton John Are Leading the Way; Billboard, June 18, 2025

ROBERT LEVINE, Billboard; The Fight Over AI Is Just Beginning, and Artists Like Elton John Are Leading the Way

Five Months into the Trump Presidency: Charting the latest offensives against libraries and how advocates are pushing back; American Libraries, June 18, 2025

Hannah Weinberg, American Libraries; Five Months into the Trump Presidency: Charting the latest offensives against libraries and how advocates are pushing back

"Since our last report, libraries have continued to experience significant upheaval from President Trump’s actions. In May, the Trump administration fired Librarian of Congress Carla Hayden and Register of Copyrights Shira Perlmutter. We also saw legal cases challenging the administration’s defunding of the Institute of Museum and Library Services (IMLS) continue to make their way through the courts in May and June. Meanwhile, library advocates contacted their legislators to fight for federal library funding in fiscal year (FY) 2026.

Here are several updates on the attacks against libraries across the US and the ways in which library supporters are pushing back."

Wednesday, June 18, 2025

AI copyright anxiety will hold back creativity; MIT Technology Review, June 17, 2025

MIT Technology Review; AI copyright anxiety will hold back creativity

"Who, exactly, owns the outputs of a generative model? The user who crafted the prompt? The developer who built the model? The artists whose works were ingested to train it? Will the social forces that shape artistic standing—critics, curators, tastemakers—still hold sway? Or will a new, AI-era hierarchy emerge? If every artist has always borrowed from others, is AI’s generative recombination really so different? And in such a litigious culture, how long can copyright law hold its current form? The US Copyright Office has begun to tackle the thorny issues of ownership and says that generative outputs can be copyrighted if they are sufficiently human-authored. But it is playing catch-up in a rapidly evolving field.

Different industries are responding in different ways...

I don’t consider this essay to be great art. But I should be transparent: I relied extensively on ChatGPT while drafting it...

Many people today remain uneasy about using these tools. They worry it’s cheating, or feel embarrassed to admit that they’ve sought such help...

I recognize the counterargument, notably put forward by Nicholas Thompson, CEO of the Atlantic: that content produced with AI assistance should not be eligible for copyright protection, because it blurs the boundaries of authorship. I understand the instinct. AI recombines vast corpora of preexisting work, and the results can feel derivative or machine-like.

But when I reflect on the history of creativity—van Gogh reworking Eisen, Dalí channeling Bruegel, Sheeran defending common musical DNA—I’m reminded that recombination has always been central to creation. The economist Joseph Schumpeter famously wrote that innovation is less about invention than “the novel reassembly of existing ideas.” If we tried to trace and assign ownership to every prior influence, we’d grind creativity to a halt." 

Tuesday, June 17, 2025

ED SHEERAN’S ‘THINKING OUT LOUD’ COPYRIGHT LAWSUIT WON’T GO TO SUPREME COURT; Rolling Stone, June 16, 2025

 TOMÁS MIER, Rolling Stone; ED SHEERAN’S ‘THINKING OUT LOUD’ COPYRIGHT LAWSUIT WON’T GO TO SUPREME COURT

"Despite a plea from one of the people accusing Ed Sheeran of copying Marvin Gaye‘s “Let’s Get It On,” the Supreme Court will not be taking on a copyright case around Sheeran’s hit, “Thinking Out Loud.” On Monday, the high court refused to take on the lawsuit claiming that Sheeran infringed on the copyright of Gaye’s song.

“No reasonable jury could find that the two songs, taken as a whole, are substantially similar in light of their dissimilar melodies and lyrics,” Judge Michael Park wrote for the U.S. Court of Appeals for the Second Circuit, per USA Today."

Sunday, June 15, 2025

Elon Musk’s Tesla sues former Optimus robot engineer for allegedly stealing trade secrets; New York Post, June 12, 2025

Thomas Barrabi, New York Post; Elon Musk’s Tesla sues former Optimus robot engineer for allegedly stealing trade secrets

"Elon Musk’s Tesla is suing one of its former engineers for allegedly stealing trade secrets related to its highly anticipated Optimus humanoid robot.

The defendant is Zhongjie “Jay” Li, who cofounded the humanoid robot startup Proception Inc. after working at Tesla from Aug. 2022 to Sept. 2024, according to the complaint filed in San Francisco federal court on Wednesday.

The lawsuit alleges Li, who worked on “advanced robotic hand sensors—and was entrusted with some of the most sensitive technical data in the program,” downloaded Optimus files onto two smartphones."

AI chatbots need more books to learn from. These libraries are opening their stacks; AP, June 12, 2025

 MATT O’BRIEN, AP; AI chatbots need more books to learn from. These libraries are opening their stacks

"Supported by “unrestricted gifts” from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries and museums around the world on how to make their historic collections AI-ready in a way that also benefits the communities they serve.

“We’re trying to move some of the power from this current AI moment back to these institutions,” said Aristana Scourtas, who manages research at Harvard Law School’s Library Innovation Lab. “Librarians have always been the stewards of data and the stewards of information.”

Harvard’s newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s — a Korean painter’s handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians. 

It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems."

Saturday, June 14, 2025

Two men jailed for life for supplying car bomb that killed Daphne Caruana Galizia; The Guardian, June 10, 2025

The Guardian; Two men jailed for life for supplying car bomb that killed Daphne Caruana Galizia


[Kip Currier: It's encouraging to see that justice can occur, even in places and situations where corruption is deeply entangled and seemingly intractable. I vividly remember learning from The Guardian's reporting about the horrific car bomb murder of courageous investigative journalist Daphne Caruana Galizia in Malta in October 2017:

The journalist who led the Panama Papers investigation into corruption in Malta was killed on Monday in a car bomb near her home.

Daphne Caruana Galizia died on Monday afternoon when her car, a Peugeot 108, was destroyed by a powerful explosive device which blew the vehicle into several pieces and threw the debris into a nearby field.

A blogger whose posts often attracted more readers than the combined circulation of the country’s newspapers, Caruana Galizia was recently described by the Politico website as a “one-woman WikiLeaks”. Her blogs were a thorn in the side of both the establishment and underworld figures that hold sway in Europe’s smallest member state.

Her most recent revelations pointed the finger at Malta’s prime minister, Joseph Muscat, and two of his closest aides, connecting offshore companies linked to the three men with the sale of Maltese passports and payments from the government of Azerbaijan.

https://www.theguardian.com/world/2017/oct/16/malta-car-bomb-kills-panama-papers-journalist

As mentioned in the 2017 article, Caruana Galizia was reporting on corruption that involved the Maltese government at the time. Journalists like Caruana Galizia risk -- and all too often lose -- their lives to expose corruption and promote public awareness and accountability for wrongdoing.

These intrepid reporters also shed important light on the ways that the wealthy, powerful, and famous are frequently able to circumvent laws and ethical standards that apply to everyone else, as was revealed by the Panama Papers investigation.

Non-profit groups like Transparency International are committed to exposing corruption and promoting democracy and accountability:

We are Transparency International U.S. (TI US), part of the world’s largest coalition against corruption. We give voices to victims and witnesses of corruption, and work with governments, businesses, and citizens to stop the abuse of entrusted power.

In collaboration with national chapters in more than 100 countries, we are leading the fight to turn our vision of a world free from corruption into reality. Our U.S. office focuses on stemming the harms caused by illicit finance, strengthening political integrity, and promoting a positive U.S. role in global anti-corruption initiatives. Through a combination of research, advocacy, and policy, we engage with stakeholders to increase public understanding of corruption and hold institutions and individuals accountable.

https://us.transparency.org/who-we-are/

My forthcoming Bloomsbury book Ethics, Information, and Technology (January 2026) examines the corrosive impacts of corruption. It also explores organizations like Transparency International that report on and educate about corrupt practices, as well as efforts to root out activities that damage public trust and to positively change organizational cultures where corruption exists.

Corruption is also often intertwined with other ethical issues critically assessed in the book, such as conflicts of interest, censorship, research misconduct, misinformation and disinformation, counterfeit goods, and deficits of transparency, accountability, data integrity, freedom of expression, and free and independent presses.]


[Excerpt]

"Two men have been sentenced to life in prison for supplying the car bomb that killed the anti-corruption journalist Daphne Caruana Galizia in Malta eight years ago.

The sentencing on Tuesday of Robert Agius and Jamie Vella, reported to be members of the island’s criminal underworld, marked a significant step in the long campaign to bring those charged with Caruana Galizia’s murder to justice.

Her death in October 2017 sparked outrage across Europe and embroiled Malta’s governing party in accusations of a coverup, ultimately leading to the resignation of the then prime minister, Joseph Muscat.

Prosecutors have brought charges against seven people, including a millionaire businessman who is still awaiting trial."

What Swift fan accounts should know about copyright after Barstool's 'Taylor Watch' canceled; USA TODAY, June 12, 2025

Bryan West, Nashville Tennessean, USA TODAY; What Swift fan accounts should know about copyright after Barstool's 'Taylor Watch' canceled

""'Taylor Watch' is canceled," Keegs said on the 150th episode, "because having a music related podcast or something that can toe the line with lawsuits in general where it comes to music rights, whatever, is just not feasible with Barstool Sports at this time."

One underlying issue lies in copyrighted photos, videos and music being used on social media. Several posts potentially opened parent company Barstool Sports to lawsuits, and the podcasters had two options: to cancel "Taylor Watch" or be fired."

Friday, June 13, 2025

How Disney’s AI lawsuit could shift the future of entertainment; The Washington Post, June 11, 2025

The Washington Post; How Disney’s AI lawsuit could shift the future of entertainment

"The battle over the future of AI-generated content escalated on Wednesday as two Hollywood titans sued a fast-growing AI start-up for copyright infringement.

Disney and Universal, whose entertainment empires include Pixar, Star Wars, Marvel and Despicable Me, sued Midjourney, claiming it wrongfully trained its image-generating AI models on the studios’ intellectual property.

They are the first major Hollywood studios to file copyright infringement lawsuits, marking a pivotal moment in the ongoing fight by artists, newspapers and content makers to stop AI firms from using their work as training data — or at least make them pay for it."

Thursday, June 12, 2025

In first-of-its-kind lawsuit, Hollywood giants sue AI firm for copyright infringement; NPR, June 12, 2025

NPR; In first-of-its-kind lawsuit, Hollywood giants sue AI firm for copyright infringement

"n a first-of-its-kind lawsuit, entertainment companies Disney and Universal are suing AI firm Midjourney for copyright infringement.

The 110-page lawsuit, filed Wednesday in a U.S. district court in Los Angeles, includes detailed appendices illustrating the plaintiffs' claims with visual examples and alleges that Midjourney stole "countless" copyrighted works to train its AI engine in the creation of AI-generated images."

Wednesday, June 11, 2025

Disney, Universal File First Major Studio Lawsuit Against AI Company, Sue Midjourney for Copyright Infringement: ‘This Is Theft’; Variety, June 11, 2025

 Todd Spangler, Variety; Disney, Universal File First Major Studio Lawsuit Against AI Company, Sue Midjourney for Copyright Infringement: ‘This Is Theft’

"Disney and NBCU filed a federal lawsuit Tuesday against Midjourney, a generative AI start-up, alleging copyright infringement. The companies alleged that Midjourney’s own website “displays hundreds, if not thousands, of images generated by its Image Service at the request of its subscribers that infringe Plaintiffs’ Copyrighted Works.”

A copy of the lawsuit is at this link...

Disney and NBCU’s lawsuit includes images alleged to be examples of instances of Midjourney’s infringement. Those include an image of Marvel’s Deadpool and Wolverine (pictured above), Iron Man, Spider-Man, the Hulk and more; Star Wars’ Darth Vader, Yoda, R2-D2, C-3PO and Chewbacca; Disney’s Princess Elsa and Olaf from “Frozen”; characters from “The Simpsons”; Pixar’s Buzz Lightyear from “Toy Story” and Lightning McQueen from “Cars”; DreamWorks’ “How to Train Your Dragon”; and Universal’s “Shrek” and the yellow Minions from the “Despicable Me” film franchise."

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Getty Images Faces Off Against Stability in Court as First Major AI Copyright Trial Begins; PetaPixel, June 10, 2025

Matt Growcoot, PetaPixel; Getty Images Faces Off Against Stability in Court as First Major AI Copyright Trial Begins

"The Guardian notes that the trial will focus on specific photos taken by famous photographers. Getty plans to bring up photos of the Chicago Cubs taken by sports photographer Gregory Shamus and photos of film director Christopher Nolan taken by Andreas Rentz. 

All-in-all, 78,000 pages of evidence have been disclosed for the case and AI experts are being called in to give testimonies. Getty is also suing Stability AI in the United States in a parallel case. The trial in London is expected to run for three weeks and will be followed by a written decision from the judge at a later date."

Lawyers face sanctions for citing fake cases with AI, warns UK judge; Reuters, June 6, 2025

Reuters; Lawyers face sanctions for citing fake cases with AI, warns UK judge

"Lawyers who use artificial intelligence to cite non-existent cases can be held in contempt of court or even face criminal charges, London's High Court warned on Friday, in the latest example of generative AI leading lawyers astray.

A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations.

"There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling...

She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice"."

Monday, June 9, 2025

5 Dangerous Myths About AI Ethics You Shouldn’t Believe; Forbes, May 14, 2025

Bernard Marr, Forbes; 5 Dangerous Myths About AI Ethics You Shouldn’t Believe

"AI can empower just about any business to innovate and drive efficiency, but it also has the potential to do damage and cause harm. This means that everyone putting it to use needs to understand the ethical frameworks in place to keep everyone safe.

At the end of the day, AI is a tool. AI ethics can be thought of as the safety warning you get in big letters at the front of any user manual, setting out some firm dos and don’ts about using it.

Using AI almost always involves making ethical choices. In a business setting, understanding the many ways it can affect people and culture means we have the best information for making those choices.

It’s a subject there's still a lot of confusion around, not least involving who is responsible and who should be ensuring this gets done. So here are five common misconceptions I come across involving the ethics of generative AI and machine learning."

Newsmaker: Brewster Kahle; American Libraries, June 4, 2025

 Anne Ford, American Libraries; Newsmaker: Brewster Kahle

"How has the work of the Internet Archive been affected since Trump took office?

Well, the biggest effect has been getting a lot of attention for what we do. We spend a lot of time on Democracy’s Library, which is a name for collecting all the born-digital and digitized publications of government at the federal, state, and municipal levels. There’s been so much attention about all of the [digital] takedowns that we’ve received lots and lots of volunteer help toward collecting not only web assets but also databases that are being removed from government websites. It’s all hands on deck.

And you just launched a new YouTube channel.

Yes, we unveiled our next-generation microfiche scanning as part of our Democracy’s Library project, because a lot of .gov sites are on microfiche, and people don’t want to use microfiche anymore. Fortunately, the US government in its early era was pro–access to information and made government documents public domain. So we put out a YouTube livestream of the microfiche being digitized.

What would you like to see libraries and librarians do during this challenging time?

We need libraries to have at least as good rights in the digital world as we have in the physical world. There’s an upcoming website [from the Internet Archive and others] called the Four Digital Rights of Libraries, and that is something libraries can sign onto as institutions. [The website will launch during the Association of European Research Libraries’ LIBER 2025 Conference in Lausanne, Switzerland, July 2-4.]

People generally don’t know that libraries, in this digital era, are prevented from buying any ebooks or MP3s. They are not allowed by the publishers to have them. They spend and spend and spend, but they don’t end up owning anything. They’re not building collections. So the publishers can change or delete anything at any time, and they do. In their dream case, libraries will never own anything ever again. This is a structural attack on libraries. You don’t need to be a deep historian to know what happens to libraries. They’re actively destroyed by the powerful.

So let’s spend [our collection budgets] buying ebooks, buying music, buying material from small publishers or anybody [else] that will actually sell to us. Make it so we are building our own collections, not this licensing thing where these books disappear.

That’s a big ask. But the great thing about that will be that our libraries start buying things from small publishers, where most of the money goes back to the authors, not stopping with the big multinational publishers. Let’s build a system that works for more players than just big corporations that make a habit of suing libraries."

Getty argues its landmark UK copyright case does not threaten AI; Reuters, June 9, 2025

Reuters; Getty argues its landmark UK copyright case does not threaten AI

 "Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI began at London's High Court on Monday, with Getty rejecting Stability AI's contention the case posed a threat to the generative AI industry.

Seattle-based Getty, which produces editorial content and creative stock images and video, accuses Stability AI of using its images to "train" its Stable Diffusion system, which can generate images from text inputs...

Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists.

Lawyers say Getty's case will have a major impact on the law, as well as potentially informing government policy on copyright protections relating to AI."

Sunday, June 8, 2025

OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'; Fox Business, June 6, 2025

Fox Business; OpenAI to appeal copyright ruling in NY Times case as Altman calls for 'AI privilege'

"The OpenAI co-founder said the case has accelerated the need for a conversation about "AI privilege," in which "talking to an AI should be like talking to a lawyer or a doctor.""

Former Librarian of Congress Dr. Carla Hayden speaks out about her firing by Trump; CBS, June 6, 2025

CBS; Former Librarian of Congress Dr. Carla Hayden speaks out about her firing by Trump

"In this preview of an interview with national correspondent Robert Costa to be broadcast on "CBS Sunday Morning" June 8, Dr. Carla Hayden, the former Librarian of Congress fired by President Trump last month, talks for the first time about her abrupt dismissal, and the challenges facing her former institution – and libraries nationwide."

Saturday, June 7, 2025

UK government signals it will not force tech firms to disclose how they train AI; The Guardian, June 6, 2025

The Guardian; UK government signals it will not force tech firms to disclose how they train AI

"Opponents of the plans have warned that even if the attempts to insert clauses into the data bill fail, the government could be challenged in the courts over the proposed changes.

The consultation on copyright changes, which is due to produce its findings before the end of the year, contains four options: to let AI companies use copyrighted work without permission, alongside an option for artists to “opt out” of the process; to leave the situation unchanged; to require AI companies to seek licences for using copyrighted work; and to allow AI firms to use copyrighted work with no opt-out for creative companies and individuals.

The technology secretary, Peter Kyle, has said the copyright-waiver-plus-opt-out scenario is no longer the government’s preferred option, but Kidron’s amendments have attempted to head off that option by effectively requiring tech companies to seek licensing deals for any content that they use to train their AI models."

How AI and copyright turned into a political nightmare for Labour; Politico.eu, June 4, 2025

JOSEPH BAMBRIDGE, Politico.eu; How AI and copyright turned into a political nightmare for Labour

"The Data (Use and Access Bill) has ricocheted between the Commons and the Lords in an extraordinarily long incidence of ping-pong, with both Houses digging their heels in and a frenzied lobbying battle on all sides."

Do AI systems have moral status?; Brookings, June 4, 2025

Brookings; Do AI systems have moral status?

"In March, researchers announced that a large language model (LLM) passed the famous Turing test, a benchmark designed by computer scientist Alan Turing in 1950 to evaluate whether computers could think. This follows research from last year suggesting that the time is now for artificial intelligence (AI) labs to take the welfare of their AI models into account."

Friday, June 6, 2025

AI firms say they can’t respect copyright. These researchers tried.; The Washington Post, June 5, 2025

The Washington Post; AI firms say they can’t respect copyright. These researchers tried.

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7Bwhich Meta released in 2023.

A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate.

The group built an AI model that is significantly smaller than the latest offered by OpenAI’s ChatGPT or Google’s Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools.

That could have implications for the policy debate swirling around AI and copyright.

The paper itself does not take a position on whether scraping text to train AI is fair use.

That debate has reignited in recent weeks with a high-profile lawsuit and dramatic turns around copyright law and enforcement in both the U.S. and U.K."


The U.S. Copyright Office used to be fairly low-drama. Not anymore; NPR, June 6, 2025

NPR; The U.S. Copyright Office used to be fairly low-drama. Not anymore

"The U.S. Copyright Office is normally a quiet place. It mostly exists to register materials for copyright and advise members of Congress on copyright issues. Experts and insiders used words like "stable" and "sleepy" to describe the agency. Not anymore...

Inside the AI report

That big bombshell report on generative AI and copyright can be summed up like this – in some instances, using copyrighted material to train AI models could count as fair use. In other cases, it wouldn't.

The conclusion of the report says this: "Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs—all of which can affect the market."

"It's very even keeled," said Keith Kupferschmid, CEO of the Copyright Alliance, a group that represents artists and publishers pushing for stronger copyright laws.

Kupferschmid said the report avoids generalizations and takes arguments on a case-by-case basis.

"Perlmutter was beloved, no matter whether you agreed with her or not, because she did the hard work," Kupferschmid said. "She always was very thoughtful and considered all these different viewpoints."

It remains to be seen how the report will be used in the dozens of legal cases over copyright and AI usage."

Thursday, June 5, 2025

Government AI copyright plan suffers fourth House of Lords defeat; BBC, June 2, 2025

Zoe Kleinman, BBC; Government AI copyright plan suffers fourth House of Lords defeat

"The argument is over how best to balance the demands of two huge industries: the tech and creative sectors. 

More specifically, it's about the fairest way to allow AI developers access to creative content in order to make better AI tools - without undermining the livelihoods of the people who make that content in the first place.

What's sparked it is the Data (Use and Access) Bill.

This proposed legislation was broadly expected to finish its long journey through parliament this week and sail off into the law books. 

Instead, it is currently stuck in limbo, ping-ponging between the House of Lords and the House of Commons.

A government consultation proposes AI developers should have access to all content unless its individual owners choose to opt out. 

But 242 members of the House of Lords disagree with the bill in its current form.

They think AI firms should be forced to disclose which copyrighted material they use to train their tools, with a view to licensing it."

Eminem Hits Meta With A Copyright Lawsuit After It Allegedly Misappropriated Hundreds Of His Songs; ABOVE THE LAW, June 4, 2025

Chris Williams, ABOVE THE LAW; Eminem Hits Meta With A Copyright Lawsuit After It Allegedly Misappropriated Hundreds Of His Songs

"Don’t. Mess. With. Eminem. And if the events are as cut and dried as the complaint makes it seem, Meta is getting off easy with the $109M price tag. Meta of all companies should know that the only thing that can get away with brazenly stealing the work of wealthy hard-working artists without facing legal consequences is AI-scrapping software."

Tuesday, June 3, 2025

Artificial Intelligence—Promises and Perils for Humans’ Rights; Harvard Law School Human Rights Program, June 10, 2025 10:30 AM EDT

 Harvard Law School Human Rights Program; Artificial Intelligence—Promises and Perils for Humans’ Rights

"In recent years, rapid advances in Artificial Intelligence (AI) technology, significantly accelerated by the development and deployment of deep learning and Large Language Models, have taken center stage in policy discussions and public consciousness. Amidst a public both intrigued and apprehensive about AI’s transformative potential across workplaces, families, and even broader political, economic, and geopolitical structures, a crucial conversation is emerging around its ethical, legal, and policy dimensions.

This webinar will convene a panel of prominent experts from diverse fields to delve into the critical implications of AI for humans and their rights. The discussion will broadly address the anticipated human rights harms stemming from AI’s increasing integration into society and explore potential responses to these challenges. A key focus will be on the role of international law and human rights law in addressing these harms, considering whether this legal framework can offer the appropriate tools for effective intervention."

Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond; The Federalist Society, June 3, 2025 12 PM EDT

 The Federalist Society; Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond

"The idea of Artificial Intelligence has long presented potential challenges in the legal realm, and as AI tools become more broadly available and widely used, those potential hurdles are becoming ever more salient for lawyers in their day-to-day operations. Questions abound, from what potential risks of bias and error may exist in using an AI tool, to the challenges related to professional responsibility as traditionally understood, to the risks large language learning models pose to client confidentiality. Some contend that AI is a must-use, as it opens the door to faster, more efficient legal research that could equip lawyers to serve their clients more effectively. Others reject the use of AI, arguing that the risks of use and the work required to check the output it gives exceed its potential benefit.

Join us for a FedSoc Forum exploring the ethical and legal implications of artificial intelligence in the practice of law.

Featuring: 

  • Laurin H. Mills, Member, Werther & Mills, LLC
  • Philip A. Sechler, Senior Counsel, Alliance Defending Freedom
  • Prof. Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law Emeritus, UCLA School of Law; Thomas M. Siebel Senior Fellow, Hoover Institution, Stanford University
  • (Moderator) Hon. Brantley Starr, District Judge, United States District Court for the Northern District of Texas"