Showing posts with label AI tech companies.

Sunday, December 29, 2024

AI's assault on our intellectual property must be stopped; Financial Times, December 21, 2024

Kate Mosse, Financial Times; AI's assault on our intellectual property must be stopped

"Imagine my dismay, therefore, to discover that those 15 years of dreaming, researching, planning, writing, rewriting, editing, visiting libraries and archives, translating Occitan texts, hunting down original 13th-century documents, becoming an expert in Catharsis, apparently counts for nothing. Labyrinth is just one of several of my novels that have been scraped by Meta's large language model. This has been done without my consent, without remuneration, without even notification. This is theft...

AI companies present creators as being against change. We are not. Every artist I know is already engaging with AI in one way or another. But a distinction needs to be made between AI that can be used in brilliant ways -- for example, medical diagnosis -- and the foundations of AI models, where companies are essentially stealing creatives' work for their own profit. We should not forget that the AI companies rely on creators to build their models. Without strong copyright law that ensures creators can earn a living, AI companies will lack the high-quality material that is essential for their future growth."

Friday, December 27, 2024

Tech companies face tough AI copyright questions in 2025; Reuters, December 27, 2024

Reuters; Tech companies face tough AI copyright questions in 2025

"The new year may bring pivotal developments in a series of copyright lawsuits that could shape the future business of artificial intelligence.

The lawsuits from authors, news outlets, visual artists, musicians and other copyright owners accuse OpenAI, Anthropic, Meta Platforms and other technology companies of using their work to train chatbots and other AI-based content generators without permission or payment.
Courts will likely begin hearing arguments starting next year on whether the defendants' copying amounts to "fair use," which could be the AI copyright war's defining legal question."

Sunday, December 8, 2024

Google CEO: AI development is finally slowing down—‘the low-hanging fruit is gone’; CNBC, December 8, 2024

Megan Sauer, CNBC; Google CEO: AI development is finally slowing down—‘the low-hanging fruit is gone’

"Now, with the industry’s competitive landscape somewhat established — multiple big tech companies, including Google, have competing models — it’ll take time for another technological breakthrough to shock the AI industry into hyper-speed development again, Pichai said at the New York Times’ DealBook Summit last week.

“I think the progress is going to get harder. When I look at [2025], the low-hanging fruit is gone,” said Pichai, adding: “The hill is steeper ... You’re definitely going to need deeper breakthroughs as we get to the next stage.”...

Some tech CEOs, like Microsoft’s Satya Nadella, agree with Pichai. “Seventy years of the Industrial Revolution, there wasn’t much industry growth, and then it took off ... it’s never going to be linear,” Nadella said at the Fast Company Innovation Festival 2024 in October.

Others disagree, at least publicly. OpenAI CEO Sam Altman, for example, posted “there is no wall” on social media platform X in November — a response to reports that the recently released ChatGPT-4 was only moderately better than previous models."

Tuesday, December 3, 2024

Getty Images CEO Calls AI Training Models ‘Pure Theft’; PetaPixel, December 3, 2024

MATT GROWCOOT, PetaPixel; Getty Images CEO Calls AI Training Models ‘Pure Theft’

"The CEO of Getty Images has penned a column in which he calls the practice of scraping photos and other content from the open web by AI companies “pure theft”.

Writing for Fortune, Craig Peters argues that fair use rules must be respected and that AI training practices are in contravention of those rules...

“I am responsible for an organization that employs over 1,700 individuals and represents the work of more than 600,000 journalists and creators worldwide,” writes Peters. “Copyright is at the very core of our business and the livelihood of those we employ and represent.”"

Monday, November 4, 2024

What AI knows about you; Axios, November 4, 2024

Ina Fried, Axios; What AI knows about you

"Most AI builders don't say where they are getting the data they use to train their bots and models — but legally they're required to say what they are doing with their customers' data.

The big picture: These data-use disclosures open a window onto the otherwise opaque world of Big Tech's AI brain-food fight.

  • In this new Axios series, we'll tell you, company by company, what all the key players are saying and doing with your personal information and content.

Why it matters: You might be just fine knowing that picture you just posted on Instagram is helping train the next generative AI art engine. But you might not — or you might just want to be choosier about what you share.

Zoom out: AI makers need an incomprehensibly gigantic amount of raw data to train their large language and image models. 

  • The industry's hunger has led to a data land grab: Companies are vying to teach their baby AIs using information sucked in from many different sources — sometimes with the owner's permission, often without it — before new laws and court rulings make that harder. 

Zoom in: Each Big Tech giant is building generative AI models, and many of them are using their customer data, in part, to train them.

  • In some cases it's opt-in, meaning your data won't be used unless you agree to it. In other cases it is opt-out, meaning your information will automatically get used unless you explicitly say no. 
  • These rules can vary by region, thanks to legal differences. For instance, Meta's Facebook and Instagram are "opt-out" — but you can only opt out if you live in Europe or Brazil.
  • In the U.S., California's data privacy law is among the laws responsible for requiring firms to say what they do with user data. In the EU, it's the GDPR."

Friday, October 18, 2024

Penguin Random House underscores copyright protection in AI rebuff; The Bookseller, October 18, 2024

MATILDA BATTERSBY, The Bookseller; Penguin Random House underscores copyright protection in AI rebuff

"The world’s biggest trade publisher has changed the wording on its copyright pages to help protect authors’ intellectual property from being used to train large language models (LLMs) and other artificial intelligence (AI) tools, The Bookseller can exclusively reveal.

Penguin Random House (PRH) has amended its copyright wording across all imprints globally, confirming it will appear “in imprint pages across our markets”. The new wording states: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems”, and will be included in all new titles and any backlist titles that are reprinted.

The statement also “expressly reserves [the titles] from the text and data mining exception”, in accordance with a European Parliament directive.

The move specifically to ban the use of its titles by AI firms for the development of chatbots and other digital tools comes amid a slew of copyright infringement cases in the US and reports that large tranches of pirated books have already been used by tech companies to train AI tools. In 2024, several academic publishers including Taylor & Francis, Wiley and Sage have announced partnerships to license content to AI firms.

PRH is believed to be the first of the Big Five anglophone trade publishers to amend its copyright information to reflect the acceleration of AI systems and the alleged reliance by tech companies on using published work to train language models."

Thursday, October 17, 2024

Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?; The Guardian, October 16, 2024

Garrison Lovely, The Guardian; Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?

"I’m writing a book on the economics and politics of AI and have analyzed years of nationwide polling on the topic. The findings are pretty consistent: people worry about risks from AI, favor regulations, and don’t trust companies to police themselves. Incredibly, these findings tend to hold true for both Republicans and Democrats.

So why would Newsom buck the popular bill?

Well, the bill was fiercely resisted by most of the AI industry, including Google, Meta and OpenAI. The US has let the industry self-regulate, and these companies desperately don’t want that to change – whatever sounds their leaders make to the contrary...

The top three names on the congressional letter – Zoe Lofgren, Anna Eshoo, and Ro Khanna – have collectively taken more than $4m in political contributions from the industry, accounting for nearly half of their lifetime top-20 contributors. Google was their biggest donor by far, with nearly $1m in total.

The death knell probably came from the former House speaker Nancy Pelosi, who published her own statement against the bill, citing the congressional letter and Li’s Fortune op-ed.

In 2021, reporters discovered that Lofgren’s daughter is a lawyer for Google, which prompted a watchdog to ask Pelosi to negotiate her recusal from antitrust oversight roles.

Who came to Lofgren’s defense? Eshoo and Khanna.

Three years later, Lofgren remains in these roles, which have helped her block efforts to rein in big tech – against the will of even her Silicon Valley constituents.

Pelosi’s 2023 financial disclosure shows that her husband owned between $16m and $80m in stocks and options in Amazon, Google, Microsoft and Nvidia...

Sunny Gandhi of the youth tech advocacy group Encode Justice, which co-sponsored the bill, told me: “When you tell the average person that tech giants are creating the most powerful tools in human history but resist simple measures to prevent catastrophic harm, their reaction isn’t just disbelief – it’s outrage. This isn’t just a policy disagreement; it’s a moral chasm between Silicon Valley and Main Street.”

Newsom just told us which of these he values more."

Wednesday, October 16, 2024

His daughter was murdered. Then she reappeared as an AI chatbot.; The Washington Post, October 15, 2024

The Washington Post; His daughter was murdered. Then she reappeared as an AI chatbot.

"Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information...

The company’s terms of service prevent users from impersonating any person or entity...

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners...

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have their child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to outrage from the children’s families, The Post reported last year.

“We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”"

Monday, September 30, 2024

OpenAI Faces Early Appeal in First AI Copyright Suit From Coders; Bloomberg Law, September 30, 2024

Isaiah Poritz, Bloomberg Law; OpenAI Faces Early Appeal in First AI Copyright Suit From Coders

"OpenAI Inc. and Microsoft Corp.‘s GitHub will head to the country’s largest federal appeals court to resolve their first copyright lawsuit from open-source programmers who claim the companies’ AI coding tool Copilot violates a decades-old digital copyright law.

Judge Jon S. Tigar granted the programmers’ request for a mid-case turn to the US Court of Appeals for the Ninth Circuit, which must determine whether OpenAI’s copying of open-source code to train its AI model without proper attribution to the programmers could be a violation of the Digital Millennium Copyright Act...

The programmers argued that Copilot fails to include authorship and licensing terms when it outputs code. Unlike other lawsuits against AI companies, the programmers didn’t allege that OpenAI and GitHub engaged in copyright infringement, which is different from a DMCA violation."

Thursday, August 29, 2024

OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024

 Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims

"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."

Wednesday, August 21, 2024

Startup using blockchain to prevent copyright theft by AI is valued over $2 billion after fresh funding; CNBC, August 21, 2024

 Ryan Browne, CNBC; Startup using blockchain to prevent copyright theft by AI is valued over $2 billion after fresh funding

"San-Francisco-based startup Story said Wednesday that it raised $80 million of funding for a blockchain designed to prevent artificial intelligence makers like OpenAI from taking creators’ intellectual property without permission."

Tuesday, August 20, 2024

Authors sue Claude AI chatbot creator Anthropic for copyright infringement; AP, August 19, 2024

MATT O’BRIEN, AP; Authors sue Claude AI chatbot creator Anthropic for copyright infringement

"A group of authors is suing artificial intelligence startup Anthropic, alleging it committed “large-scale theft” in training its popular chatbot Claude on pirated copies of copyrighted books.

While similar lawsuits have piled up for more than a year against competitor OpenAI, maker of ChatGPT, this is the first from writers to target Anthropic and its Claude chatbot.

The smaller San Francisco-based company — founded by ex-OpenAI leaders — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way...

The lawsuit was brought by a trio of writers — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — who are seeking to represent a class of similarly situated authors of fiction and nonfiction...

What links all the cases is the claim that tech companies ingested huge troves of human writings to train AI chatbots to produce human-like passages of text, without getting permission or compensating the people who wrote the original works. The legal challenges are coming not just from writers but visual artists, music labels and other creators who allege that generative AI profits have been built on misappropriation...

But the lawsuit against Anthropic accuses it of using a dataset called The Pile that included a trove of pirated books. It also disputes the idea that AI systems are learning the way humans do."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

 Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"