Wednesday, February 12, 2025

Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law; The National Law Review, February 11, 2025

Joseph A. Meckes and Joseph Grasser of Squire Patton Boggs (US) LLP - Global IP and Technology Law Blog, The National Law Review; Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law

"In what may turn out to be an influential decision, Judge Stephanos Bibas ruled as a matter of law in Thompson Reuters v. Ross Intelligence that creating short summaries of law to train Ross Intelligence’s artificial intelligence legal research application not only infringes Thompson Reuters’ copyrights as a matter of law but that the copying is not fair use. Judge Bibas had previously ruled that infringement and fair use were issues for the jury but changed his mind: “A smart man knows when he is right; a wise man knows when he is wrong.”

At issue in the case was whether Ross Intelligence directly infringed Thomson Reuters’ copyrights in its case law headnotes that are organized by Westlaw’s proprietary Key Number system. Thomson Reuters contended that Ross Intelligence’s contractor copied those headnotes to create “Bulk Memos.” Ross Intelligence used the Bulk Memos to train its competitive AI-powered legal research tool. Judge Bibas ruled that (i) the West headnotes were sufficiently original and creative to be copyrightable, and (ii) some of the Bulk Memos used by Ross were so similar that they infringed as a matter of law...

In other words, even if a work is selected entirely from the public domain, the simple act of selection is enough to give rise to copyright protection."

U.S. Copyright Office Releases Publication Produced by Group of Economic Scholars Identifying the Economic Implications of Artificial Intelligence for Copyright Policy; U.S. Copyright Office, February 12, 2025

 U.S. Copyright Office, Issue No. 1062; U.S. Copyright Office Releases Publication Produced by Group of Economic Scholars Identifying the Economic Implications of Artificial Intelligence for Copyright Policy

"Today, the U.S. Copyright Office is releasing Identifying the Economic Implications of Artificial Intelligence for Copyright Policy: Context and Direction for Economic Research. The publication, produced by a group of economic scholars, discusses the economic issues at the intersection of artificial intelligence (AI) and copyright policy. The group engaged in several months of substantive discussions, consultation with technical experts, and research, culminating in a daylong roundtable event. Participants spent the subsequent months articulating and refining the roundtable discussions, resulting in today’s publication. The group’s goal was identifying the most consequential economic characteristics of AI and copyright and what factors may inform policy decisions. 

"Development of AI technology has meaningful implications for the economic frameworks of copyright policy, and economists have only just begun to explore those," said Copyright Office Chief Economist Brent Lutes. "The Office convened an economic roundtable on AI and copyright policy with experts to help expediate research and coordinate the research community. The goal of this group’s work is to provide the broader economic research community a structured and rigorous framework for considering economic evidence." 

This publication serves as a platform for articulating the ideas expressed by participants as part of the roundtable. All principal contributors submitted written materials summarizing the group’s prior discussions on a particular topic, with editorial support provided by the Office of the Chief Economist. The many ideas and views discussed in this project do not necessarily represent the views of every roundtable participant or their respective institutions. The U.S. Copyright Office does not take a position on these ideas for the purposes of this project."

Monday, February 10, 2025

UNESCO Holds Workshop on AI Ethics in Cuba; UNESCO, February 7, 2025

 UNESCO; UNESCO Holds Workshop on AI Ethics in Cuba

"During the joint UNESCO-MINCOM National Workshop "Ethics of Artificial Intelligence: Equity, Rights, Inclusion" in Havana, the results of the application of the Readiness Assessment Methodology (RAM) for the ethical development of AI in Cuba were presented.

Similarly, there was a discussion on the Ethical Impact Assessment (EIA), a tool aimed at ensuring that AI systems follow ethical rules and are transparent...

The meeting began with a video message from the Assistant Director-General for Social and Human Sciences, Gabriela Ramos, who emphasized that artificial intelligence already has a significant impact on many aspects of our lives, reshaping the way we work, learn, and organize society.

Technologies can bring us greater productivity, help deliver public services more efficiently, empower society, and drive economic growth, but they also risk perpetuating global inequalities, destabilizing societies, and endangering human rights if they are not safe, representative, and fair, and above all, if they are not accessible to everyone.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences"

Meta staff torrented nearly 82TB of pirated books for AI training — court records reveal copyright violations; Tom's Hardware, February 9, 2025

Tom's Hardware; Meta staff torrented nearly 82TB of pirated books for AI training — court records reveal copyright violations

"Facebook parent-company Meta is currently fighting a class action lawsuit alleging copyright infringement and unfair competition, among others, with regards to how it trained LLaMA. According to an X (formerly Twitter) post by vx-underground, court records reveal that the social media company used pirated torrents to download 81.7TB of data from shadow libraries including Anna’s Archive, Z-Library, and LibGen. It then used this information to train its AI models.

The evidence, in the form of written communication, shows the researchers’ concerns about Meta’s use of pirated materials. One senior AI researcher said way back in October 2022, “I don’t think we should use pirated material. I really need to draw a line here.” Another said, “Using pirated material should be beyond our ethical threshold,” adding, “SciHub, ResearchGate, LibGen are basically like PirateBay or something like that, they are distributing content that is protected by copyright and they’re infringing it.”"

Saturday, February 8, 2025

OpenAI says DeepSeek ‘inappropriately’ copied ChatGPT – but it’s facing copyright claims too; The Conversation, February 4, 2025

Senior Lecturer in Natural Language Processing, The University of Melbourne, and Lecturer in Cybersecurity, The University of Melbourne, The Conversation; OpenAI says DeepSeek ‘inappropriately’ copied ChatGPT – but it’s facing copyright claims too

"Within days, DeepSeek’s app surpassed ChatGPT in new downloads and set stock prices of tech companies in the United States tumbling. It also led OpenAI to claim that its Chinese rival had effectively pilfered some of the crown jewels from OpenAI’s models to build its own. 

In a statement to the New York Times, the company said: 

We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more. We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here.

The Conversation approached DeepSeek for comment, but it did not respond.

But even if DeepSeek copied – or, in scientific parlance, “distilled” – at least some of ChatGPT to build R1, it’s worth remembering that OpenAI also stands accused of disrespecting intellectual property while developing its models."

The public domain keeps culture vibrant. Why is it shrinking?; Vox, February 7, 2025

Jorge Just, Vox; The public domain keeps culture vibrant. Why is it shrinking?

"Copyrights keep getting longer. What does that mean for art?"

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

 Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."

Tuesday, February 4, 2025

Trump Return-to-Office Order Faces Big Exemption at Patent Office; Bloomberg Law, January 27, 2025

Bloomberg Law; Trump Return-to-Office Order Faces Big Exemption at Patent Office

"The US Patent and Trademark Office told employees Friday the agency will fully comply with President Donald Trump’s return-to-office order, but the directive won’t override collective bargaining agreements covering most of its employees."

The US Copyright Office's new ruling on AI art is here - and it could change everything; ZDNet, February 3, 2025

David Gewirtz, Senior Contributing Editor, ZDNet; The US Copyright Office's new ruling on AI art is here - and it could change everything

"Last week, the US Copyright Office released its detailed report and comprehensive guidelines on the issue of copyright protection and AI-generated work.

For a government legal document, it is a fascinating exploration of the intersection of artificial intelligence and the very concept of authorship and creativity. The study's authors conduct a deep dive, taking in comments from the general public and experts alike, and producing an analysis of what it means to creatively author a work.

They then explore the issue of whether an AI-generated work versus an AI-assisted work is subject to copyright protection, and what that means not only for individual authors but also for the encouragement of creativity and innovation in society as a whole.

This is the second of what will be a three-part report from the Copyright Office. Part 1, published last year, explored digital replicas, using digital technology to "realistically replicate" someone's voice or appearance.

Part 3 is expected to be released later this year. It will focus on the issues of training AIs using copyrighted works, aspects of licensing, and how liability might be allocated in cases where a spectacular AI failure can be attributed to training (which sometimes results in litigation)."

Proud Boys Lose Control of Their Name to a Black Church They Vandalized; The New York Times, February 3, 2025

The New York Times; Proud Boys Lose Control of Their Name to a Black Church They Vandalized

"The Proud Boys no longer have control over their own name.

Under a ruling by a Washington judge on Monday, the infamous far-right group was stripped of control over the trademark “Proud Boys” and was barred from selling any merchandise with either its name or its symbols without the consent of a Black church in Washington that its members vandalized. In June 2023, the church won a $2.8 million default judgment against the Proud Boys after the organization’s former leader, Enrique Tarrio, and several of his subordinates attacked it in a night of violence after a pro-Trump rally in December 2020.

The ruling by the judge, Tanya M. Jones Bosier of the Superior Court of the District of Columbia, effectively means that Proud Boys chapters across the country can no longer legally use their own name or the group’s traditional symbols without the permission of the church that was attacked, the Metropolitan African Methodist Episcopal Church.

The ruling also clears the way for the church to try to seize any money that the Proud Boys might make by selling merchandise like hats or T-shirts emblazoned with their name or with any of their familiar logos, including a black and yellow laurel wreath."

Monday, February 3, 2025

DeepSeek has ripped away AI’s veil of mystique. That’s the real reason the tech bros fear it; The Observer via The Guardian, February 2, 2025

The Observer via The Guardian; DeepSeek has ripped away AI’s veil of mystique. That’s the real reason the tech bros fear it

"DeepSeek, sponsored by a Chinese hedge fund, is a notable achievement. Technically, though, it is no advance on large language models (LLMs) that already exist. It is neither faster nor “cleverer” than OpenAI’s ChatGPT or Anthropic’s Claude and just as prone to “hallucinations” – the tendency, exhibited by all LLMs, to give false answers or to make up “facts” to fill gaps in its data. According to NewsGuard, a rating system for news and information websites, DeepSeek’s chatbot made false claims 30% of the time and gave no answers to 53% of questions, compared with 40% and 22% respectively for the 10 leading chatbots in NewsGuard’s most recent audit.

The figures expose the profound unreliability of all LLMs. DeepSeek’s particularly high non-response rate is likely to be the product of its censoriousness; it refuses to provide answers on any issue that China finds sensitive or about which it wants facts restricted, whether Tiananmen Square or Taiwan...

Nevertheless, for all the pushback, each time one fantasy prediction fails to materialise, another takes its place. Such claims derive less from technological possibilities than from political and economic needs. While AI technology has provided hugely important tools, capable of surpassing humans in specific fields, from the solving of mathematical problems to the recognition of disease patterns, the business model depends on hype. It is the hype that drives the billion-dollar investment and buys political influence, including a seat at the presidential inauguration."

Sunday, February 2, 2025

Copyright Office suggests AI copyright debate was settled in 1965; Ars Technica, January 30, 2025

ASHLEY BELANGER, Ars Technica; Copyright Office suggests AI copyright debate was settled in 1965

"For stakeholders who have been awaiting this guidance for months, the Copyright Office report may not change the law, but it offers some clarity.

For some artists who hoped to push the Copyright Office to adapt laws, the guidelines may disappoint, leaving many questions about a world of possible creative AI uses unanswered. But while a case-by-case approach may leave some artists unsure about which parts of their works are copyrightable, seemingly common cases are being resolved more readily. According to the Copyright Office, after each decision, it gets easier to register AI works that meet similar standards for copyrightability. Perhaps over time, artists will grow more secure in how they use AI and whether it will impact their exclusive rights to distribute works.

That's likely cold comfort for the artist advocating for prompting alone to constitute authorship. One AI artist told Ars in October that being denied a copyright has meant suffering being mocked and watching his award-winning work freely used anywhere online without his permission and without payment. But in the end, the Copyright Office was apparently more sympathetic to other commenters who warned that humanity's progress in the arts could be hampered if a flood of easily generated, copyrightable AI works drowned too many humans out of the market...

Although the Copyright Office suggested that this week's report might be the most highly anticipated, Jernite said that Hugging Face is eager to see the next report, which officials said would focus on "the legal implications of training AI models on copyrighted works, including licensing considerations and the allocation of any potential liability.""

Thursday, January 30, 2025

AI-assisted works can get copyright with enough human creativity, says US copyright office; AP, January 29, 2025

 MATT O’BRIEN, AP; AI-assisted works can get copyright with enough human creativity, says US copyright office

"Artists can copyright works they made with the help of artificial intelligence, according to a new report by the U.S. Copyright Office that could further clear the way for the use of AI tools in Hollywood, the music industry and other creative fields.

The nation’s copyright office, which sits in the Library of Congress and is not part of the executive branch, receives about half a million copyright applications per year covering millions of individual works. It has increasingly been asked to register works that are AI-generated.

And while many of those decisions are made on a case-by-case basis, the report issued Wednesday clarifies the office’s approach as one based on what the top U.S. copyright official describes as the “centrality of human creativity” in authoring a work that warrants copyright protections."

Utah Hockey Club scraps permanent Yeti nickname due to copyright dispute with cooler company; Yahoo Sports, January 29, 2025

 Jack Baer, Yahoo Sports; Utah Hockey Club scraps permanent Yeti nickname due to copyright dispute with cooler company

"The Utah Hockey Club is moving on from a nickname many believed would eventually become its permanent moniker, the Utah Yeti. The primary reason why: a cooler brand.

Mike Maughan, an executive of the team's parent company Smith Entertainment Group, confirmed to reporters Wednesday, per ESPN, that the team would pivot to a different selection of possible names after the U.S. Patent and Trademark Office rejected the Yeti name and logo it submitted for approval.

The USPTO specifically cited a "likelihood of confusion" for consumers due to other companies and brands with a similar name, most notably the Yeti Cooler Company. Different companies can feature a similar name if they're in clearly different fields (e.g. the Beatles' Apple Corps and Apple Computers), but it seems likely an NHL team with the same name would clash with certain products, such as the cooler company's apparel line."

Vatican says AI has 'shadow of evil,' calls for close oversight; Reuters, January 28, 2025

Reuters; Vatican says AI has 'shadow of evil,' calls for close oversight

"The Vatican on Tuesday called for governments to keep a close eye on the development of artificial intelligence, warning the technology contained "the shadow of evil" in its ability to spread misinformation.

"AI generated fake media can gradually undermine the foundations of society," said a new text on the ethics of AI, written by two Vatican departments and approved by Pope Francis.

"This issue requires careful regulation, as misinformation—especially through AI-controlled or influenced media—can spread unintentionally, fuelling political polarization and social unrest," it said."

Wednesday, January 29, 2025

Copyright Office Releases Part 2 of Artificial Intelligence Report; U.S. Copyright Office, Issue No. 1060, January 29, 2025

U.S. Copyright Office, Issue No. 1060; Copyright Office Releases Part 2 of Artificial Intelligence Report

"Today, the U.S. Copyright Office is releasing Part 2 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI). This Part of the Report addresses the copyrightability of outputs created using generative AI. The Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts. The Office confirms that the use of AI to assist in the process of creation or the inclusion of AI-generated material in a larger human-generated work does not bar copyrightability. It also finds that the case has not been made for changes to existing law to provide additional protection for AI-generated outputs.

“After considering the extensive public comments and the current state of technological development, our conclusions turn on the centrality of human creativity to copyright,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “Where that creativity is expressed through the use of AI systems, it continues to enjoy protection. Extending protection to material whose expressive elements are determined by a machine, however, would undermine rather than further the constitutional goals of copyright.”

In early 2023, the Copyright Office announced a broad initiative to explore the intersection of copyright and AI. Since then, the Office has issued registration guidance for works incorporating AI-generated content, hosted public listening sessions and webinars, met with experts and stakeholders, published a notice of inquiry seeking input from the public, and reviewed more than 10,000 responsive comments, which served to inform these conclusions.

The Report is being released in three Parts. Part 1 was published on July 31, 2024, and recommended federal legislation to respond to the unauthorized distribution of digital replicas that realistically but falsely depict an individual. The final, forthcoming Part 3 will address the legal implications of training AI models on copyrighted works, including licensing considerations and the allocation of any potential liability.

As announced last year, the Office also plans to supplement its March 2023 registration guidance and update the relevant sections of the Compendium of U.S. Copyright Office Practices.

For more information about the Copyright Office’s AI Initiative, please visit the website."

Tuesday, January 28, 2025

It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy; Electronic Frontier Foundation (EFF), January 27, 2025

KATHARINE TRENDACOSTA, Electronic Frontier Foundation (EFF); It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation 

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online; therefore, laws and policy should be made in the open and with users’ concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information. 
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek." 

Elton John backs Paul McCartney in criticising proposed overhaul to UK copyright system; The Guardian, January 27, 2025

The Guardian; Elton John backs Paul McCartney in criticising proposed overhaul to UK copyright system

"Elton John has backed Paul McCartney in criticising a proposed overhaul of the UK copyright system, and has called for new rules to prevent tech companies from riding “roughshod over the traditional copyright laws that protect artists’ livelihoods”.

John has backed proposed amendments to the data (use and access) bill that would extend existing copyright protections, when it goes before a vote in the House of Lords on Tuesday.

The government is also consulting on an overhaul of copyright laws that would result in artists having to opt out of letting AI companies train their models using their work, rather than an opt-in model...

John told the Sunday Times that he felt “wheels are in motion to allow AI companies to ride roughshod over the traditional copyright laws that protect artists’ livelihoods. This will allow global big tech companies to gain free and easy access to artists’ work in order to train their artificial intelligence and create competing music. This will dilute and threaten young artists’ earnings even further. The musician community rejects it wholeheartedly.”

He said that “challenging financial situations” and increased touring costs made it “harder than ever for new and emerging musicians to make the finances of the industry stack up to sustain a fledgling career”, and added that the UK’s place on the world stage as “a leader in arts and popular culture is under serious jeopardy” without robust copyright protection.

“It is the absolute bedrock of artistic prosperity, and the country’s future success in the creative industries depends on it.”

The government consultation runs until 25 February and will explore how to improve trust between the creative and AI sectors, and how creators can license and get paid for use of their material."

Monday, January 27, 2025

Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines; WVU Today, January 22, 2025

 WVU Today; Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines

"Two West Virginia University researchers have designed a curriculum to engage liberal arts faculty in discussions on the social, ethical and technical aspects of artificial intelligence and its role in classrooms.

Through a grant from the National Endowment for the Humanities, Erin Brock Carlson, assistant professor of English, and Scott Davidson, professor of philosophy, both at the WVU Eberly College of Arts and Sciences, have designed an interdisciplinary, cross-institutional program to facilitate conversations among faculty about the benefits and drawbacks of AI, how it functions and the need for human interpretation.

The award will fund a summer workshop in which Carlson and Davidson will offer AI trainings for humanities faculty and guide them through creation and development of courses with an AI component. The researchers will then assist as faculty offer those courses to students, assess progress and help with the implementation of the projects that develop.

The researchers said they hope to challenge the notion that artificial intelligence research falls into the domain of STEM fields. 

“The humanities gets overlooked and underappreciated so often,” Carlson said. “We are doing important, meaningful research, just like our colleagues in STEM and other fields. This is a chance to use a humanities lens to examine contemporary problems and developments like artificial intelligence and also to get conversations going between fields that oftentimes don’t talk to one another as much as we should.”

Co-directors Carlson and Davidson will be joined by a team of mentors and fellows — two from data science fields and two from the humanities perspective — that will serve and assist as resources in the interdisciplinary conversations. The seminar and summer workshops will support the creation or redesign of 10 courses. They plan to invite off-campus experts to help facilitate the workshops, work with the faculty and support their projects.

“It’s really about expanding capacity at the University and in the humanities to investigate the implications of AI or to actually use AI in humanities courses, whether it’s for writing, creating art or creating projects through the use of AI,” Davidson said. “There are a lot of different possibilities and directions that we hope these courses take. If we have 10 of them, it’s really going to have a big impact on humanities education here at the University.”

Carlson and Davidson acknowledge that attitudes about AI tend to be either extremely optimistic or extremely skeptical but that the reality is somewhere in the middle.

“AI is such a simplistic term to describe a whole suite of different technologies and developments that folks are dealing with every day, whether they know it or not,” Carlson said, noting that discussions could focus on personal, social and economic impacts of AI use, as well as how it affects character and intellectual values. 

Davidson was inspired to focus on AI when he found an erroneous, AI-generated summary of one of his own articles.

“It was totally wrong,” he said. “I didn’t say those things, and it made me think about how somebody might look me up and find that summary of my article and get this false impression of me. That really highlighted that we need to build an understanding in students of the need to inquire deeper and to understand that you have to be able to evaluate AI’s accuracy and its reliability.”

Carlson and Davidson said the conversations need to consider AI’s drawbacks, as well. Using AI consumes large amounts of water and electricity, resulting in greenhouse gas emissions. Data centers produce electronic waste that can contain mercury and lead.

They also intend to follow legal cases and precedents surrounding the use of AI.

“That’s another aspect of AI and the ways that it represents people,” Carlson said. “Because it has a very real, material impact on people in communities. It’s not just a super computer in a room. It’s a network that has a bunch of different implications for a bunch of different people, ranging from jobs to familial relationships. That’s the value of the humanities — to ask these tough questions because it’s increasingly difficult to avoid all of it.”

Conversations, as they expand, will need to keep up with the pace of AI’s rapidly developing landscape.  

“There’s going to be a lot of people involved in this,” she said. “We put together an amazing team. We want it to be an open, honest and ethical conversation that brings in other folks and opens up further conversations across the College and the University at large.”"

Saturday, January 25, 2025

Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators; Forbes, January 25, 2025

Virginie Berger, Forbes; Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators

"Generative AI is reshaping creativity in ways that highlight a troubling paradox: while touted as a force for innovation, it increasingly relies on exploiting copyrighted materials, songs, books, and artworks, without consent or compensation. This transformation underscores the growing conflict between technological progress and the preservation of artistic integrity. At the heart of the issue lies a troubling paradox: while companies like OpenAI and Google promote AI as a force for innovation, their reliance on scraping copyrighted materials, songs, books, and artworks, undermines the very creativity they claim to enhance. This exploitation is often disguised as progress or justified as necessary for global competitiveness, particularly in the AI race against China. However, these claims mask a deeper reality: the consolidation of power by Big Tech at the expense of creators. As the balance of influence shifts, those who drive culture and innovation are increasingly marginalized, raising urgent questions about the future of intellectual property and creative industries."