Showing posts with label generative AI tools.

Saturday, August 3, 2024

AI is complicating plagiarism. How should scientists respond?; Nature, July 30, 2024

Diana Kwon, Nature; AI is complicating plagiarism. How should scientists respond?

"From accusations that led Harvard University’s president to resign in January, to revelations in February of plagiarized text in peer-review reports, the academic world has been roiled by cases of plagiarism this year.

But a bigger problem looms in scholarly writing. The rapid uptake of generative artificial intelligence (AI) tools — which create text in response to prompts — has raised questions about whether this constitutes plagiarism and under what circumstances it should be allowed. “There’s a whole spectrum of AI use, from completely human-written to completely AI-written — and in the middle, there’s this vast wasteland of confusion,” says Jonathan Bailey, a copyright and plagiarism consultant based in New Orleans, Louisiana.

Generative AI tools such as ChatGPT, which are based on algorithms known as large language models (LLMs), can save time, improve clarity and reduce language barriers. Many researchers now argue that they are permissible in some circumstances and that their use should be fully disclosed.

But such tools complicate an already fraught debate around the improper use of others’ work. LLMs are trained to generate text by digesting vast amounts of previously published writing. As a result, their use could result in something akin to plagiarism — if a researcher passes off the work of a machine as their own, for instance, or if a machine generates text that is very close to a person’s work without attributing the source. The tools can also be used to disguise deliberately plagiarized text, and any use of them is hard to spot. “Defining what we actually mean by academic dishonesty or plagiarism, and where the boundaries are, is going to be very, very difficult,” says Pete Cotton, an ecologist at the University of Plymouth, UK."

Monday, July 3, 2023

Bestselling authors Mona Awad and Paul Tremblay sue OpenAI over copyright infringement; The Los Angeles Times, July 1, 2023

Emily St. Martin, The Los Angeles Times; Bestselling authors Mona Awad and Paul Tremblay sue OpenAI over copyright infringement

"Two bestselling novelists filed a suit against OpenAI in a San Francisco federal court on Wednesday, claiming in a proposed class action that the company used copyright-protected intellectual property to “train” its artificial intelligence chatbot.

Authors Mona Awad and Paul Tremblay claim that ChatGPT was trained in part by “ingesting” their novels without their consent."

Friday, June 30, 2023

Copyright Office: Sorry, but you probably can’t protect your AI-generated art; Fast Company, June 30, 2023

Jesus Diaz, Fast Company; Copyright Office: Sorry, but you probably can’t protect your AI-generated art

"Well, there’s nothing to see here, folks. You don’t need any of the generative AI tools in our weekly roundup because they will produce stuff you don’t really own. At least that’s what the United States Copyright Office (USCO) says. The federal agency doubled down on its AI doctrine during a recent webinar, labeling anything produced by AI as “unclaimable material.”

In other words, anything that comes out of an AI program can’t be protected under copyright law and will not be accepted even if it’s included in a work created by a human. So those extra trees and mountains you added to your landscape photo with Photoshop Firefly beta? They are not yours, sorry.

Robert Kasunic of the USCO says, “The Office will refuse to register works entirely generated by AI. Human authorship is a precondition to copyrightability.” But it’s more complicated than that. As Petapixel reports, USCO will register your images if they are modified with AI, but you will have to declare which parts are made using AI, making them “unclaimable, essentially discounting them” from the copyright protection. Kasunic went on to say that USCO believes that using any AI to generate content is akin to giving instructions to a commissioned artist.

How will USCO enforce this policy in a world where generative AI work is practically undetectable? It’s a question that only has one obvious answer: LOL."

Tuesday, June 27, 2023

ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them?; Bloomberg Law, June 26, 2023

Kirby Ferguson, Bloomberg Law; ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them?

"Getty Images, a top supplier of visual content for license, has sued two of the leading companies offering generative AI tools. Will intellectual property laws spell doom for the burgeoning generative AI business? We explore the brewing battle over copyright and AI in this video. 

Video features: 

Friday, April 28, 2023

EU proposes new copyright rules for generative AI; Reuters, April 28, 2023

Reuters; EU proposes new copyright rules for generative AI

"Companies deploying generative AI tools, such as ChatGPT, will have to disclose any copyrighted material used to develop their systems, according to an early EU agreement that could pave the way for the world's first comprehensive laws governing the technology."

Monday, April 10, 2023

Generative AI Has an Intellectual Property Problem; Harvard Business Review, April 7, 2023

Gil Appel, Juliana Neelbauer, and David A. Schweidel, Harvard Business Review; Generative AI Has an Intellectual Property Problem

"This isn’t the first time technology and copyright law have crashed into each other. Google successfully defended itself against a lawsuit by arguing that transformative use allowed for the scraping of text from books to create its search engine, and for the time being, this decision remains precedential.

But there are other, non-technological cases that could shape how the products of generative AI are treated. A case before the U.S. Supreme Court against the Andy Warhol Foundation — brought by photographer Lynn Goldsmith, who had licensed an image of the late musician Prince — could refine U.S. copyright law on the issue of when a piece of art is sufficiently different from its source material to become unequivocally “transformative,” and whether a court can consider the meaning of the derivative work when it evaluates that transformation. If the court finds that the Warhol piece is not a fair use, it could mean trouble for AI-generated works.

All this uncertainty presents a slew of challenges for companies that use generative AI. There are risks regarding infringement — direct or unintentional — in contracts that are silent on generative AI usage by their vendors and customers. If a business user is aware that training data might include unlicensed works or that an AI can generate unauthorized derivative works not covered by fair use, a business could be on the hook for willful infringement, which can include damages up to $150,000 for each instance of knowing use. There’s also the risk of accidentally sharing confidential trade secrets or business information by inputting data into generative AI tools."