Showing posts with label AI outputs. Show all posts

Wednesday, July 9, 2025

Why the new rulings on AI copyright might actually be good news for publishers; Fast Company, July 9, 2025

PETE PACHAL, Fast Company; Why the new rulings on AI copyright might actually be good news for publishers

"The outcomes of both cases were more mixed than the headlines suggest, and they are also deeply instructive. Far from closing the door on copyright holders, they point to places where litigants might find a key...

Taken together, the three cases point to a clearer path forward for publishers building copyright cases against Big AI:

Focus on outputs instead of inputs: It’s not enough that someone hoovered up your work. To build a solid case, you need to show that what the AI company did with it reproduced it in some form. So far, no court has definitively decided whether AI outputs are meaningfully different enough to count as “transformative” in the eyes of copyright law, but it should be noted that courts have ruled in the past that copyright violation can occur even when small parts of the work are copied—if those parts represent the “heart” of the original.

Show market harm: This looks increasingly like the main battle. Now that we have a lot of data on how AI search engines and chatbots—which, to be clear, are outputs—are affecting the online behavior of news consumers, the case that an AI service harms the media market is easier to make than it was a year ago. In addition, the emergence of licensing deals between publishers and AI companies is evidence that creating outputs without offering such a deal causes market harm.

Question source legitimacy: Was the content legally acquired or pirated? The Anthropic case opens this up as a possible attack vector for publishers. If they can prove scraping occurred through paywalls—without subscribing first—that could be a violation even absent any outputs."

Wednesday, June 25, 2025

Ball State University Libraries Launches Research Guide on Ethical AI Use; Ball State University, June 24, 2025

 Ball State University; Ball State University Libraries Launches Research Guide on Ethical AI Use

"In an era in which artificial intelligence tools are rapidly reshaping how we access and share information, Ball State University Libraries has introduced a new research guide to help students, faculty, staff, and community members use AI more thoughtfully and effectively.

The interactive guide, now available at bsu.libguides.com, equips users with foundational skills to assess the credibility, accuracy, and ethical implications of generative AI tools like ChatGPT and image generators. Through five short videos and practical examples, the guide teaches users to identify potential misinformation, recognize AI-generated bias, and apply AI output in meaningful and responsible ways.

Key learning outcomes include:"

Tuesday, June 24, 2025

Anthropic’s AI copyright ‘win’ is more complicated than it looks; Fast Company, June 24, 2025

CHRIS STOKEL-WALKER, Fast Company; Anthropic’s AI copyright ‘win’ is more complicated than it looks

"And that’s the catch: This wasn’t an unvarnished win for Anthropic. Like other tech companies, Anthropic allegedly sourced training materials from piracy sites for ease—a fact that clearly troubled the court. “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote, referring to Anthropic’s alleged pirating of more than 7 million books.

That alone could carry billions in liability, with statutory damages starting at $750 per book—a trial on that issue is still to come.

So while tech companies may still claim victory (with some justification, given the fair use precedent), the same ruling also implies that companies will need to pay substantial sums to legally obtain training materials. OpenAI, for its part, has in the past argued that licensing all the copyrighted material needed to train its models would be practically impossible.

Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction: The U.S.,” she says. “I think they don’t entirely have purchase over this thing about whether or not it was transformative in the sense of changing Claude’s output.”"

The copyright war between the AI industry and creatives; Financial Times, June 23, 2025

Financial Times; The copyright war between the AI industry and creatives

"One is that the government itself estimates that “creative industries generated £126bn in gross value added to the economy [5 per cent of GDP] and employed 2.4 million people in 2022”. It is at the very least an open question whether the value added of the AI industry will ever be of a comparable scale in this country. Another is that the creative industries represent much of the best of what the UK and indeed humanity does. The idea of handing over its output for free is abhorrent...

Interestingly, for much of the 19th century, the US did not recognise international copyright at all in its domestic law. Anthony Trollope himself complained fiercely about the theft of the copyright over his books."

Wednesday, April 9, 2025

I’m Not Convinced Ethical Generative AI Currently Exists; Wired, February 20, 2025

Wired; I’m Not Convinced Ethical Generative AI Currently Exists

"For me, the ethics of generative AI use can be broken down to issues with how the models are developed—specifically, how the data used to train them was accessed—as well as ongoing concerns about their environmental impact. In order to power a chatbot or image generator, an obscene amount of data is required, and the decisions developers have made in the past—and continue to make—to obtain this repository of data are questionable and shrouded in secrecy. Even what people in Silicon Valley call “open source” models hide the training datasets inside...

The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of the user’s prompts when interacting with a chatbot? What were the biases in the training data? How did the devs teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task at hand is cultivating more ethical development practices and user interactions."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Thursday, August 29, 2024

OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024

Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims

"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."

Sunday, August 4, 2024

Music labels' AI lawsuits create copyright puzzle for courts; Reuters, August 4, 2024

Reuters; Music labels' AI lawsuits create copyright puzzle for courts

"Suno and Udio pointed to past public statements defending their technology when asked for comment for this story. They filed their initial responses in court on Thursday, denying any copyright violations and arguing that the lawsuits were attempts to stifle smaller competitors. They compared the labels' protests to past industry concerns about synthesizers, drum machines and other innovations replacing human musicians...

The labels' claims echo allegations by novelists, news outlets, music publishers and others in high-profile copyright lawsuits over chatbots like OpenAI's ChatGPT and Anthropic's Claude that use generative AI to create text. Those lawsuits are still pending and in their early stages.

Both sets of cases pose novel questions for the courts, including whether the law should make exceptions for AI's use of copyrighted material to create something new...

"Music copyright has always been a messy universe," said Julie Albert, an intellectual property partner at law firm Baker Botts in New York who is tracking the new cases. And even without that complication, Albert said fast-evolving AI technology is creating new uncertainty at every level of copyright law.

WHOSE FAIR USE?

The intricacies of music may matter less in the end if, as many expect, the AI cases boil down to a "fair use" defense against infringement claims - another area of U.S. copyright law filled with open questions."

Thursday, August 1, 2024

How OpenAI is Looking to Beat the Growing Pool of Copyright Cases; The Fashion Law, July 23, 2024

 Aaron West, The Fashion Law; How OpenAI is Looking to Beat the Growing Pool of Copyright Cases

"The recent barrage of copyright infringement disputes that are being waged against OpenAI and Microsoft by major publishers, authors, and other plaintiffs continues to stack up, with various outcomes coming in early rounds from district courts. While most of the cases that the artificial intelligence (“AI”) company and its chief investor are facing are still in early stages, at least one major theme has started to emerge from the litigation: The two high-powered defendants are leaning on a defense that paints the plaintiffs’ claims as stemming from their manipulation of the AI-powered platforms at play – and thus, prompting allegedly unlikely and hypothetical outputs – and the harm they allege in their infringement cases as purely speculative."

Saturday, June 8, 2024

You Can Create Award-Winning Art With AI. Can You Copyright It?; Bloomberg Law, June 5, 2024

Matthew S. Schwartz, Bloomberg Law; You Can Create Award-Winning Art With AI. Can You Copyright It?

"We delved into the controversy surrounding the use of copyrighted material in training AI systems in our first two episodes of this season. Now we shift our focus to the output. Who owns artwork created using artificial intelligence? Should our legal system redefine what constitutes authorship? Or, as AI promises to redefine how we create, will the government cling to historical notions of authorship?

Guests:

  • Jason M. Allen, founder of Art Incarnate
  • Sy Damle, partner in the copyright litigation group at Latham & Watkins
  • Shira Perlmutter, Register of Copyrights and director of the US Copyright Office"

Wednesday, May 29, 2024

Will the rise of AI spell the end of intellectual property rights?; The Globe and Mail, May 27, 2024

SHEEMA KHAN, The Globe and Mail; Will the rise of AI spell the end of intellectual property rights?

"AI’s first challenge to IP is in the inputs...

Perhaps the question will become: Will IP be the death of AI?...

The second challenge relates to who owns the AI-generated products...

Yet IP rights are key to innovation, as they provide a limited monopoly to monetize investments in research and development. AI represents an existential threat in this regard.

Clearly, the law has not caught up. But sitting idly by is not an option, as there are too many important policy issues at play."