Showing posts with label AI.

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Tuesday, June 3, 2025

Artificial Intelligence—Promises and Perils for Humans’ Rights; Harvard Law School Human Rights Program, June 10, 2025 10:30 AM EDT

Harvard Law School Human Rights Program; Artificial Intelligence—Promises and Perils for Humans’ Rights

"In recent years, rapid advances in Artificial Intelligence (AI) technology, significantly accelerated by the development and deployment of deep learning and Large Language Models, have taken center stage in policy discussions and public consciousness. Amidst a public both intrigued and apprehensive about AI’s transformative potential across workplaces, families, and even broader political, economic, and geopolitical structures, a crucial conversation is emerging around its ethical, legal, and policy dimensions.

This webinar will convene a panel of prominent experts from diverse fields to delve into the critical implications of AI for humans and their rights. The discussion will broadly address the anticipated human rights harms stemming from AI’s increasing integration into society and explore potential responses to these challenges. A key focus will be on the role of international law and human rights law in addressing these harms, considering whether this legal framework can offer the appropriate tools for effective intervention."

Japan aims to lift intellectual property competitiveness via AI use; The Mainichi, June 3, 2025

The Mainichi; Japan aims to lift intellectual property competitiveness via AI use

"The Japanese government said Tuesday it will seek to enhance the country's competitiveness in the area of intellectual property by promoting the use of artificial intelligence and attracting foreign talent.

In the intellectual property strategy for 2025, the country will take advantage of the international popularity of Japanese anime and the content of such movies that highlights local culture to help promote regional economies, expecting a total economic impact of around 1 trillion yen ($7.0 billion)."

Tuesday, May 27, 2025

WATCH: Is A.I. the new colonialism?; The Ink, May 27, 2025

ANAND GIRIDHARADAS AND KAREN HAO, The Ink; WATCH: Is A.I. the new colonialism?

"We just got off a call with the technology journalist Karen Hao, the keenest chronicler of the technology that’s promising — or threatening — to reshape the world, who has a new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

The book talks not just about artificial intelligence and what it might be, or its most visible spokesperson and what he might believe, but also about the way the tech industry titans resemble more and more the empires of old in their relentless resource extraction and exploitation of labor around the world, their take-no-prisoners competitiveness against supposedly “evil” pretenders, and their religious fervor for progress and even salvation. She also told us about what the future might look like if we get A.I. right, and the people who produce the data, the resources, and control the labor power can reassert their ownership and push back against these new empires to build a more humane and human future."

Wednesday, May 21, 2025

We're All Copyright Owners. Why You Need to Care About AI and Copyright; CNET, May 19, 2025

Katelyn Chedraoui, CNET; We're All Copyright Owners. Why You Need to Care About AI and Copyright

"Most of us don't think about copyright very often in our daily lives. But in the age of generative AI, it has quickly become one of the most important issues in the development and outputs of chatbots and image and video generators. It's something that affects all of us because we're all copyright owners and authors...

What does all of this mean for the future?

Copyright owners are in a bit of a holding pattern for now. But beyond the legal and ethical implications, copyright in the age of AI raises important questions about the value of creative work, the cost of innovation and the ways in which we need or ought to have government intervention and protections. 

There are two distinct ways to view the US's intellectual property laws, Mammen said. The first is that these laws were enacted to encourage and reward human flourishing. The other is more economically focused; the things that we're creating have value, and we want our economy to be able to recognize that value accordingly."

Sunday, May 18, 2025

RIP American innovation; The Washington Post, May 12, 2025

The Washington Post; RIP American innovation

"That U.S. businesses have led the recent revolution in artificial intelligence is owed to the decades of research supported by the U.S. government in computing, neuroscience, autonomous systems, biology and beyond that far precedes those companies’ investments. Virtually the entire U.S. biotech industry — which brought us treatments for diabetes, breast cancer and HIV — has its roots in publicly funded research. Even a small boost to NIH funding has been shown to increase overall patents for biotech and pharmaceutical companies...

Giving out grants for what might look frivolous or wasteful on the surface is a feature, not a bug, of publicly funded research. Consider that Agriculture Department and NIH grants to study chemicals in wild yams led to cortisone and medical steroids becoming widely affordable. Or that knowing more about the fruit fly has aided discoveries related to human aging, Parkinson’s disease and cancer.

For obvious reasons, companies don’t tend to invest in shared scientific knowledge that then allows lots of innovation to flourish. That would mean spending money on something that does not reap quick rewards just for that particular company.

Current business trends are more likely to help kill the U.S. innovation engine. A growing share of the country’s research and development is now being carried out by big, old companies, as opposed to start-ups and universities — and, in the process, the U.S. as a whole is spending more on R&D without getting commensurately more economic growth."

Friday, May 16, 2025

Democrats press Trump on Copyright Office chief’s removal; The Hill, May 14, 2025

JARED GANS, The Hill; Democrats press Trump on Copyright Office chief’s removal

"A half dozen Senate Democrats are pressing President Trump over his firing of the head of the U.S. Copyright Office, arguing that the move is illegal. 

“It threatens the longstanding independence and integrity of the Copyright Office, which plays a vital role in our economy,” the members said in the letter. “You are acting beyond your power and contrary to the intent of Congress as you seek to erode the legal and institutional independence of offices explicitly designed to operate outside the reach of partisan influence.” ...

The head of the Copyright Office is responsible for shaping federal copyright policy, and the senators argued the role is particularly crucial as the country confronts issues concerning the intersection of copyright law and technologies like artificial intelligence."

Monday, May 12, 2025

US Copyright Office found AI companies sometimes breach copyright. Next day its boss was fired; The Register, May 12, 2025

Simon Sherwood, The Register; US Copyright Office found AI companies sometimes breach copyright. Next day its boss was fired

"The head of the US Copyright Office has reportedly been fired, the day after agency concluded that builders of AI models use of copyrighted material went beyond existing doctrines of fair use.

The office’s opinion on fair use came in a draft of the third part of its report on copyright and artificial intelligence. The first part considered digital replicas and the second tackled whether it is possible to copyright the output of generative AI.

The office published the draft [PDF] of Part 3, which addresses the use of copyrighted works in the development of generative AI systems, on May 9th.

The draft notes that generative AI systems “draw on massive troves of data, including copyrighted works” and asks: “Do any of the acts involved require the copyright owners’ consent or compensation?”"

Sunday, May 11, 2025

Trump fires Copyright Office director after report raises questions about AI training; TechCrunch, May 11, 2025

TechCrunch; Trump fires Copyright Office director after report raises questions about AI training

"As for how this ties into Musk (a Trump ally) and AI, Morelle linked to a pre-publication version of a U.S. Copyright Office report released this week that focuses on copyright and artificial intelligence. (In fact, it’s actually part three of a longer report.)

In it, the Copyright Office says that while it’s “not possible to prejudge” the outcome of individual cases, there are limitations on how much AI companies can count on “fair use” as a defense when they train their models on copyrighted content. For example, the report says research and analysis would probably be allowed.

“But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” it continues.

The Copyright Office goes on to suggest that government intervention “would be premature at this time,” but it expresses hope that “licensing markets” where AI companies pay copyright holders for access to their content “should continue to develop,” adding that “alternative approaches such as extended collective licensing should be considered to address any market failure.”

AI companies including OpenAI currently face a number of lawsuits accusing them of copyright infringement, and OpenAI has also called for the U.S. government to codify a copyright strategy that gives AI companies leeway through fair use.

Musk, meanwhile, is both a co-founder of OpenAI and of a competing startup, xAI (which is merging with the former Twitter). He recently expressed support for Square founder Jack Dorsey’s call to “delete all IP law.”"

Copyright and Artificial Intelligence Part 3: Generative AI Training, Pre-Publication; U.S. Copyright Office, May 2025

U.S. Copyright Office; Copyright and Artificial Intelligence Part 3: Generative AI Training, Pre-Publication

Monday, May 5, 2025

Copyright alone cannot protect the future of creative work; Brookings, May 1, 2025

Mark MacCarthy, Brookings; Copyright alone cannot protect the future of creative work

"AI-generated content is nowhere near as good today as the output of skilled journalists, scriptwriters, videographers, photographers, commercial designers, and other creative workers. But the AI technology is getting there. Content producers will soon be able to use AI systems to generate at least some content that used to be generated without any AI assistance. Prompt engineers will work together with traditional content creators to guide new systems of content production. The promise of the new technology is that this output will be satisfactory and maybe even superior for a wide variety of purposes at a fraction of the cost."

Tuesday, April 22, 2025

AI and the visual arts: The case for copyright protection; Brookings, April 18, 2025

Brookings; AI and the visual arts: The case for copyright protection

"Looking ahead 

As AI-generated art continues to reshape the creative landscape, the legal and economic challenges surrounding copyright, authorship, and enforcement will only grow more complex. Ongoing lawsuits, reactions from artists, and market shifts highlight the struggle to define human authorship and protect artists’ rights in an era where AI-generated works hold significant commercial value, but lack clear copyright protections. With increasing pressure on legislative and regulatory bodies to address these issues, the future of AI-generated art will depend on policies that balance innovation with fair compensation and safeguards for human creativity.  

While we await the final part of the Copyright Office’s report, which will determine the legal implications of training AI on copyrighted data, the more pressing determinant of fair use in GenAI training may come from the courts. Yet, regardless of the outcome, the Copyright Office should transcend its passive regulatory guidance and actively develop new mechanisms to distinguish human-authored elements from AI-generated ones to enforce its present guidance. In addition, the office must think creatively about flexible frameworks that can account for future, more nuanced and complex modes of collaboration between human and GenAI systems. This may require stronger disclosure requirements, improved detection methods, and a reexamination of what constitutes meaningful human authorship in an increasingly AI-involved creative process.

Further, artists, tech companies, and policymakers must be brought to the table to ensure copyright law reflects the newest collaborations in AI and art, protects human creativity, and accommodates technological progress. Without safeguards, the rapid influx of AI into the art market could lead to a systemic devaluation of human original authorship and growing precarity in the creative field. The future of AI-generated art hinges on such governance. "

Wednesday, April 16, 2025

Why Musk and Dorsey want to ‘delete all IP law’; The Washington Post, April 15, 2025

Analysis, The Washington Post; Why Musk and Dorsey want to ‘delete all IP law’

"Jack Dorsey, the co-founder of Twitter and CEO of Square, posted a cryptic and drastic demand on Elon Musk’s X over the weekend: “delete all IP law.” The post drew a quick reply from Mr. X himself: “I agree.”

Musk’s laconic response amplified Dorsey’s post to his 220 million followers and sparked a debate that drew in a cast of characters including Epic Games CEO Tim Sweeney, tech lawyer and former vice presidential candidate Nicole Shanahan, novelist Walter Kirn, evolutionary psychologist Geoffrey Miller and the technologist and early Twitter developer Evan Henshaw-Plath, a.k.a. Rabble, among others...

Serious policy idea or not, the concord between Dorsey and Musk highlights how the debate over AI and copyright law is coming to a head in Silicon Valley.

How it’s resolved will have major ramifications for the tech companies, creative people and their livelihoods and the overall AI race."

Thursday, April 10, 2025

Entrance to [Copyright] Paradise Halted by the Human-Authorship Requirement; The National Law Review, April 9, 2025

Jonathan D. Reichman of Hunton Andrews Kurth, The National Law Review; Entrance to [Copyright] Paradise Halted by the Human-Authorship Requirement

"In mid-March, a federal appeals court affirmed a ruling finding that artwork created solely by an artificial intelligence (AI) system is not entitled to copyright protection. Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025). This decision aligns with the position taken by the US Copyright Office in its recent report in light of the ongoing evolution, application, and litigation surrounding AI systems. U.S. Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (2025).

While this decision may appear straightforward, future developments could arise through an application to the US Supreme Court or through cases addressing the extent of human involvement necessary in AI-generated works that seek copyright protection.

Key Takeaways

  • The Copyright Act of 1976 (Act) requires all eligible works to be authored by a human being.
  • The Act’s definition of “author” does not apply to machines.
  • The work-made-for-hire doctrine requires an existing copyright interest.
  • Thaler’s representation that the work was generated autonomously by a computer system weighed heavily against his challenges to the human-authorship requirement and the work-made-for-hire doctrine.
  • The Court rejected Dr. Thaler’s arguments that (1) the term “author” is not confined to human beings; (2) the work was made for hire; and (3) the human-authorship requirement prevents protection of works made with AI.
  • The Court affirmed the denial of copyright registration where the author of the work was listed as a machine."

Wednesday, March 12, 2025

The Copyright Office takes on the sticky issue of artificial intelligence; Federal News Network, March 11, 2025

Tom Temin, Federal News Network; The Copyright Office takes on the sticky issue of artificial intelligence

"Artificial intelligence raises storms of questions in every domain it touches. Chief among them, copyright questions. Now the U.S. Copyright Office, a congressional agency, has completed the second of two studies of AI and copyrights. This one deals with whether you can copyright outputs created using AI. Emily Chapuis, the Copyright Office’s deputy general counsel, joined the Federal Drive with Tom Temin to discuss...

Emily Chapuis: Yeah. That’s right. So we don’t recommend in the report that Congress take any action. And the reason for this is we think that copyright law is sufficiently flexible to deal with changes in technology. And that’s not just based on AI, but on the entire history of copyright law, has had to deal with these questions, whether it’s the development of the camera or the internet. The questions about copyrightability are always on a case-by-case basis. And the technology that’s used and how it’s used and what it’s used for are important elements of that. But the sort of defining legal principles aren’t different in this context than in those other ones.

Tom Temin: Right. So the human input idea then is kind of an eternal for copyright. How do you decide that? Is it a percentage of human input? Because the machine does a lot here. But you could say, ‘Well, the camera did a lot when it opened and closed the shutter and exposed silver halide. And then there was a machine process to produce that image. But it was the selection, the timing, the decisive moment.’ To quote Henri Cartier-Bresson, another French photographer. That’s really the issue here. The human input and not the machine input.

Emily Chapuis: Yeah, that’s right. And it’s hard to parse. I mean, we’ve had people ask, so what’s the percentage that has to be human created? And there’s not a clear answer to that, again, because it’s case by case. But also the question isn’t really amount as much as it is control. So who’s controlling the expression. And so one of the things that we try to explain is that even the same technology can be used in a variety of different ways. So you can use generative AI technology as a tool assistive to enhance the human expression or you can use it as a substitute for human expression. And so control is sort of the bottom line in terms of what we’re looking at to draw that distinction."

Wednesday, February 26, 2025

UK newspapers launch campaign against AI copyright plans; Independent, February 25, 2025

Martyn Landi, Independent; UK newspapers launch campaign against AI copyright plans

"Some of the UK’s biggest newspapers have used a coordinated campaign across their front pages to raise their concerns about AI’s impact on the creative industries.

Special wraps appeared on Tuesday’s editions of the Daily Express, Daily Mail, The Mirror, the Daily Star, The i, The Sun, and The Times – as well as a number of regional titles – criticising a Government consultation around possible exemptions being added to copyright law for training AI models.

The proposals would allow tech firms to use copyrighted material from creatives and publishers without having to pay or gain a licence, or reimburse creatives for using their work."

Tuesday, February 25, 2025

Musicians release silent album to protest UK's AI copyright changes; Reuters, February 25, 2025

Reuters; Musicians release silent album to protest UK's AI copyright changes

"More than 1,000 musicians, including Kate Bush and Cat Stevens, on Tuesday released a silent album to protest proposed changes to Britain's copyright laws, which could allow tech firms to train artificial intelligence models using their work."

Thursday, February 20, 2025

AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead; Electronic Frontier Foundation (EFF), February 19, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead


[Kip Currier: No, not everyone. Not requiring Big Tech to figure out a way to fairly license or get permission to use the copyrighted works of creators unjustly advantages these deep-pocketed corporations. It also inequitably disadvantages the economic and creative interests of the human beings who labor to create copyrightable content -- authors, songwriters, visual artists, and many others.

The tell is that many of these same Big Tech companies are only too willing to file copyright infringement lawsuits against anyone who they allege is infringing their AI content to create competing products and services.]


[Excerpt]


"Threats to Socially Valuable Research and Innovation 

Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning (ML) and even text and data mining (TDM) prohibitively complicated and expensive, if not impossible. Researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in myriad fields. However, licensing the vast quantity of works that high-quality TDM research requires is frequently cost-prohibitive and practically infeasible.  

Fair use protects ML and TDM research for good reason. Without fair use, copyright would hinder important scientific advancements that benefit all of us. Empirical studies back this up: research using TDM methodologies is more common in countries that protect TDM research from copyright control; in countries that don’t, copyright restrictions stymie beneficial research. It’s easy to see why: it would be impossible to identify and negotiate with millions of different copyright owners to analyze, say, text from the internet."

Monday, February 17, 2025

Copyright battles loom over artists and AI; Financial Times, February 16, 2025

louise.lucas@ft.com, Financial Times; Copyright battles loom over artists and AI

"Artists are the latest creative industry to gripe about the exploitative nature of artificial intelligence. More than 3,000 have written to protest against plans by Christie’s to auction art created using AI."

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."