Showing posts with label AI tech companies. Show all posts

Wednesday, July 16, 2025

Can Gen AI and Copyright Coexist?; Harvard Business Review, July 16, 2025

Harvard Business Review; Can Gen AI and Copyright Coexist?

"We’re experts in the study of digital transformation and have given this issue a lot of thought. We recently served, for example, on a roundtable of 10 economists convened by the U.S. Copyright Office to study the implications of gen AI on copyright policy. We recognize that the two decisions are far from the last word on this topic; both will no doubt be appealed to the Ninth Circuit and then subsequently to the Supreme Court. But in the meantime, we believe there are already many lessons to be learned from these decisions about the implications of gen AI for business—lessons that will be useful for leaders in both the creative industries and gen AI companies."

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."

Thursday, July 10, 2025

EU's AI code of practice for companies to focus on copyright, safety; Reuters, July 10, 2025

Reuters; EU's AI code of practice for companies to focus on copyright, safety

"The European Commission on Thursday unveiled a draft code of practice aimed at helping firms comply with the European Union's artificial intelligence rules and focused on copyright-protected content safeguards and measures to mitigate systemic risks.

Signing up to the code, which was drawn up by 13 independent experts, is voluntary, but companies that decline to do so will not benefit from the legal certainty provided to a signatory.

The code is part of the AI rule book, which will come into effect in a staggered manner and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and other companies."

Microsoft Pledges $4 Billion Toward A.I. Education; The New York Times, July 9, 2025

Natasha Singer, The New York Times; Microsoft Pledges $4 Billion Toward A.I. Education


[Kip Currier: Not one mention of "ethics" or "AI ethics" in this New York Times article.

So, I sent an email to the reporter today (7/10/25):

Dear Natasha Singer,

I was surprised, and actually disconcerted, to not see any mention of "ethics" and "AI ethics" concepts in your article "Microsoft Pledges $4 Billion Toward A.I. Education". Given well-documented concerns of the vital need for ethical guidelines and frameworks vis-a-vis AI by a wide range of stakeholders (e.g. religious leaders/Rome Call for AI Ethics, the U.N. AI Advisory Body, academics, etc.), I would have expected your reporting to at least have mentioned potential ethical considerations about this Microsoft funding plan, which carries such significant implications for education and societies.

Best wishes,

Kip Currier]

 

[Excerpt]

"Microsoft said on Wednesday that it planned to give more than $4 billion in cash and technology services to train millions of people to use artificial intelligence, amid an intensifying Silicon Valley crusade to embed chatbots into classrooms.

Microsoft, the maker of the Copilot chatbot, said the resources would go to schools, community colleges, technical colleges and nonprofits. The company is also starting a new training program, Microsoft Elevate Academy, to “deliver A.I. education and skilling at scale” and help 20 million people earn certificates in A.I.

“Microsoft will serve as an advocate to ensure that students in every school across the country have access to A.I. education,” Brad Smith, the president of Microsoft, said in an interview on Sunday.

Microsoft did not immediately specify how much of the more than $4 billion the company planned to dispense as grants and how much of it would be in the form of Microsoft A.I. services and cloud computing credits.

The announcement comes as tech companies are racing to train millions of teachers and students on their new A.I. tools. Even so, researchers say it is too soon to tell whether the classroom chatbots will end up improving educational outcomes or eroding important skills like critical thinking.

On Tuesday, the American Federation of Teachers, a union representing 1.8 million members, said it was setting up a national A.I. training center for educators, with $23 million in funding from Microsoft and two other chatbot makers, OpenAI and Anthropic."

Thursday, July 3, 2025

Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default; Forbes, July 2, 2025

Emma Woollacott , Forbes; Cloudflare Sidesteps Copyright Issues, Blocking AI Scrapers By Default

"IT service management company Cloudflare is striking back on behalf of content creators, blocking AI scrapers by default.

Web scrapers are bots that crawl the internet, collecting and cataloguing content of all types, and are used by AI firms to collect material that can be used to train their models.

Now, though, Cloudflare is allowing website owners to choose if they want AI crawlers to access their content, and decide how the AI companies can use it. They can opt to allow crawlers for certain purposes—search, for example—but block others. AI companies will have to obtain explicit permission from a website before scraping."

Wednesday, July 2, 2025

Eminem, AI and me: why artists need new laws in the digital age; The Guardian, July 2, 2025

The Guardian; Eminem, AI and me: why artists need new laws in the digital age

"Song lyrics, my publisher informs me, are subject to notoriously strict copyright enforcement and the cost to buy the rights is often astronomical. Fat chance as well, then, of me quoting Eminem to talk about how Lose Yourself seeped into the psyche of a generation when he rapped: “You only get one shot, do not miss your chance to blow, this opportunity comes once in a lifetime.”

Oh would it be different if I were an AI company with a large language model (LLM), though. I could scrape from the complete discography of the National and Eminem, and the lyrics of every other song ever written. Then, when a user prompted something like, “write a rap in the style of Eminem about losing money, and draw inspiration from the National’s Bloodbuzz Ohio”, my word correlation program – with hundreds of millions of paying customers and a market capitalisation worth tens if not hundreds of billions of dollars – could answer:

“I still owe money to the money to the money I owe,

But I spit gold out my throat when I flow,

So go tell the bank they can take what they like

I already gave my soul to the mic.”

And that, according to rulings last month by the US courts, is somehow “fair use” and is perplexingly not copyright infringement at all, despite no royalties having been paid to anyone in the process."

Tuesday, July 1, 2025

The Court Battles That Will Decide if Silicon Valley Can Plunder Your Work; Slate, June 30, 2025

Slate; The Court Battles That Will Decide if Silicon Valley Can Plunder Your Work

"Last week, two different federal judges in the Northern District of California made legal rulings that attempt to resolve one of the knottiest debates in the artificial intelligence world: whether it’s a copyright violation for Big Tech firms to use published books for training generative bots like ChatGPT. Unfortunately for the many authors who’ve brought lawsuits with this argument, neither decision favors their case—at least, not for now. And that means creators in all fields may not be able to stop A.I. companies from using their work however they please...

What if these copyright battles are also lost? Then there will be little in the way of stopping A.I. startups from utilizing all creative works for their own purposes, with no consideration as to the artists and writers who actually put in the work. And we will have a world blessed less with human creativity than one overrun by second-rate slop that crushes the careers of the people whose imaginations made that A.I. so potent to begin with."

AI companies start winning the copyright fight; The Guardian, July 1, 2025

The Guardian; AI companies start winning the copyright fight

"The lawsuits over AI-generated text were filed first, and, as their rulings emerge, the next question in the copyright fight is whether decisions about one type of media will apply to the next.

“The specific media involved in the lawsuit – written works versus images versus videos versus audio – will certainly change the fair-use analysis in each case,” said John Strand, a trademark and copyright attorney with the law firm Wolf Greenfield. “The impact on the market for the copyrighted works is becoming a key factor in the fair-use analysis, and the market for books is different than that for movies.”

To Strand, the cases over images seem more favorable to copyright holders, as the AI models are allegedly producing images identical to the copyrighted ones in the training data.

A bizarre and damning fact was revealed in the Anthropic ruling, too: the company had pirated and stored some 7m books to create a training database for its AI. To remediate its wrongdoing, the company bought physical copies and scanned them, digitizing the text. Now the owner of 7m physical books that no longer held any utility for it, Anthropic destroyed them. The company bought the books, diced them up, scanned the text and threw them away, Ars Technica reports. There are less destructive ways to digitize books, but they are slower. The AI industry is here to move fast and break things.

Anthropic laying waste to millions of books presents a crude literalization of the ravenous consumption of content necessary for AI companies to create their products."

Hollywood Confronts AI Copyright Chaos in Washington, Courts; The Wall Street Journal, July 1, 2025

Amrith Ramkumar and Jessica Toonkel, The Wall Street Journal; Hollywood Confronts AI Copyright Chaos in Washington, Courts

Technology firms say using copyrighted materials to train AI models is key to America’s success; creatives want their work protected

Sunday, June 29, 2025

An AI firm won a lawsuit for copyright infringement — but may face a huge bill for piracy; Los Angeles Times, June 27, 2025

Michael Hiltzik, Los Angeles Times; An AI firm won a lawsuit for copyright infringement — but may face a huge bill for piracy


[Kip Currier: Excellent informative overview of some of the principal issues, players, stakes, and recent decisions in the ongoing AI copyright legal battles. Definitely worth 5-10 minutes of your time to read and reflect on.

A key take-away, derived from Judge Vince Chhabria's decision in last week's Meta win, is that:

Artists and authors can win their copyright infringement cases if they produce evidence showing the bots are affecting their market. Chhabria all but pleaded for the plaintiffs to bring some such evidence before him: 

“It’s hard to imagine that it can be fair use to use copyrighted books...to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.” 

But “the plaintiffs never so much as mentioned it,” he lamented.

https://www.latimes.com/business/story/2025-06-27/an-ai-firm-won-a-lawsuit-over-copyright-infringement-but-may-face-a-huge-bill-for-piracy]


[Excerpt]

"Anthropic had to acknowledge a troubling qualification in Alsup’s order, however. Although he found for the company on the copyright issue, he also noted that it had downloaded copies of more than 7 million books from online “shadow libraries,” which included countless copyrighted works, without permission. 

That action was “inherently, irredeemably infringing,” Alsup concluded. “We will have a trial on the pirated copies...and the resulting damages,” he advised Anthropic ominously: Piracy on that scale could expose the company to judgments worth untold millions of dollars...

“Neither case is going to be the last word” in the battle between copyright holders and AI developers, says Aaron Moss, a Los Angeles attorney specializing in copyright law. With more than 40 lawsuits on court dockets around the country, he told me, “it’s too early to declare that either side is going to win the ultimate battle.”...

With billions of dollars, even trillions, at stake for both AI developers and the artistic community, no one expects the law to be resolved until the issue reaches the Supreme Court, presumably years from now...

But Anthropic also downloaded copies of more than 7 million books from online “shadow libraries,” which include untold copyrighted works without permission. 

Anthropic “could have purchased books, but it preferred to steal them to avoid ‘legal/practice/business slog,’” Alsup wrote. (He was quoting Anthropic co-founder and CEO Dario Amodei.)...

Artists and authors can win their copyright infringement cases if they produce evidence showing the bots are affecting their market."...

The truth is that the AI camp is just trying to get out of paying for something instead of getting it for free. Never mind the trillions of dollars in revenue they say they expect over the next decade — they claim that licensing will be so expensive it will stop the march of this supposedly historic technology dead in its tracks.

Chhabria aptly called this argument “nonsense.” If using books for training is as valuable as the AI firms say they are, he noted, then surely a market for book licensing will emerge. That is, it will — if the courts don’t give the firms the right to use stolen works without compensation."

Friday, June 27, 2025

Getty drops copyright allegations in UK lawsuit against Stability AI; AP, June 25, 2025

Kelvin Chan, AP; Getty drops copyright allegations in UK lawsuit against Stability AI

"Getty Images dropped copyright infringement allegations from its lawsuit against artificial intelligence company Stability AI as closing arguments began Wednesday in the landmark case at Britain’s High Court. 

Seattle-based Getty’s decision to abandon the copyright claim removes a key part of its lawsuit against Stability AI, which owns a popular AI image-making tool called Stable Diffusion. The two have been facing off in a widely watched court case that could have implications for the creative and technology industries."

Wednesday, June 25, 2025

Judge dismisses authors’ copyright lawsuit against Meta over AI training; AP, June 25, 2025

Matt O’Brien and Barbara Ortutay, AP; Judge dismisses authors’ copyright lawsuit against Meta over AI training

"Although Meta prevailed in its request to dismiss the case, it could turn out to be a pyrrhic victory. In his 40-page ruling, Chhabria repeatedly indicated reasons to believe that Meta and other AI companies have turned into serial copyright infringers as they train their technology on books and other works created by humans, and seemed to be inviting other authors to bring cases to his court presented in a manner that would allow them to proceed to trial.

The judge scoffed at arguments that requiring AI companies to adhere to decades-old copyright laws would slow down advances in a crucial technology at a pivotal time. “These products are expected to generate billions, even trillions of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.”

Tuesday, June 24, 2025

The copyright war between the AI industry and creatives; Financial Times, June 23, 2025

Financial Times; The copyright war between the AI industry and creatives

"One is that the government itself estimates that “creative industries generated £126bn in gross value added to the economy [5 per cent of GDP] and employed 2.4 million people in 2022”. It is at the very least an open question whether the value added of the AI industry will ever be of a comparable scale in this country. Another is that the creative industries represent much of the best of what the UK and indeed humanity does. The idea of handing over its output for free is abhorrent...

Interestingly, for much of the 19th century, the US did not recognise international copyright at all in its domestic law. Anthony Trollope himself complained fiercely about the theft of the copyright over his books."

Anthropic wins key US ruling on AI training in authors' copyright lawsuit; Reuters, June 24, 2025

Reuters; Anthropic wins key US ruling on AI training in authors' copyright lawsuit

 "A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.

Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made "fair use" of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.

Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a "central library" infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement."

Monday, June 23, 2025

Pope: Intelligence is seeking life's true meaning, not having reams of data; United States Conference of Catholic Bishops, June 20, 2025

Carol Glatz, United States Conference of Catholic Bishops; Pope: Intelligence is seeking life's true meaning, not having reams of data

"Access to vast amounts of data and information is not the same thing as having intelligence, which is uniquely human and requires being open to truth, goodness and the real meaning of life, Pope Leo XIV told AI experts and executives.

"Authentic wisdom has more to do with recognizing the true meaning of life than with the availability of data," he said in a written message released by the Vatican June 20.

"Acknowledging and respecting what is uniquely characteristic of the human person is essential to the discussion of any adequate ethical framework for the governance of AI," he wrote.

The message, written in English, was addressed to people attending the second annual Rome conference on AI, Ethics and the Future of Corporate Governance being held in Rome and at the Vatican June 19-20.

The conference "brings together executives from leading AI companies as well as large enterprises using AI with policymakers, scholars, ethicists and lawyers to consider in a holistic way the challenges facing the ethics and governance of AI, both for companies developing this revolutionary technology as well as the enterprises incorporating AI into their businesses," according to the event's website."

The Pope has a message for AI executives; Quartz, June 20, 2025

Michael Barclay, Quartz; The Pope has a message for AI executives

Pope Leo wants AI to be regulated ethically, while the U.S. is poised to bar any state-level regulations for a decade

"At the Second Annual Rome Conference on Artificial Intelligence on Friday, Pope Leo talked about where AI is headed.

The event was attended by Vatican officials, American academics, and Silicon Valley executives from Google, OpenAI, Anthropic, Meta, and more. The new pope urged serious reflection on “the inherently ethical dimension of AI, as well as its responsible governance...

Pope Leo said AI’s benefits and risks must be evaluated using a “superior ethical criterion,” adding that it “challenges all of us to reflect more deeply on the true nature and uniqueness of our shared human dignity.” He added that “access to data — however extensive — must not be confused with intelligence.”"

Sunday, June 22, 2025

Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican; CNN, June 20, 2025

CNN; Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican

"Pope Leo XIV says tech companies developing artificial intelligence should abide by an “ethical criterion” that respects human dignity.

AI must take “into account the well-being of the human person not only materially, but also intellectually and spiritually,” the pope said in a message sent Friday to a gathering on AI attended by Vatican officials and Silicon Valley executives.

“No generation has ever had such quick access to the amount of information now available through AI,” he said. But “access to data — however extensive — must not be confused with intelligence.”

He also expressed concern about AI’s impact on children’s “intellectual and neurological development,” writing that “society’s well-being depends upon their being given the ability to develop their God-given gifts and capabilities.”

That statement from the Pope came on the second of a two-day meeting for tech leaders in Rome to discuss the societal and ethical implications of artificial intelligence. The second annual Rome Conference on AI was attended by representatives from AI leaders including Google, OpenAI, Anthropic, IBM, Meta and Palantir along with academics from Harvard and Stanford and representatives of the Holy See.

The event comes at a somewhat fraught moment for AI, with the rapidly advancing technology promising to improve worker productivity, accelerate research and eradicate disease, but also threatening to take human jobs, produce misinformation, worsen the climate crisis and create even more powerful weapons and surveillance capabilities. Some tech leaders have pushed back against regulations intended to ensure that AI is used responsibly, which they say could hinder innovation and global competition.

“In some cases, AI has been used in positive and indeed noble ways to promote greater equality, but there is likewise the possibility of its misuse for selfish gain at the expense of others, or worse, to foment conflict and aggression,” Leo said in his Friday statement."

Thursday, June 12, 2025

In first-of-its-kind lawsuit, Hollywood giants sue AI firm for copyright infringement; NPR, June 12, 2025

NPR; In first-of-its-kind lawsuit, Hollywood giants sue AI firm for copyright infringement

"n a first-of-its-kind lawsuit, entertainment companies Disney and Universal are suing AI firm Midjourney for copyright infringement.

The 110-page lawsuit, filed Wednesday in a U.S. district court in Los Angeles, includes detailed appendices illustrating the plaintiffs' claims with visual examples and alleges that Midjourney stole "countless" copyrighted works to train its AI engine in the creation of AI-generated images."

Wednesday, June 11, 2025

Disney, Universal File First Major Studio Lawsuit Against AI Company, Sue Midjourney for Copyright Infringement: ‘This Is Theft’; Variety, June 11, 2025

Todd Spangler, Variety; Disney, Universal File First Major Studio Lawsuit Against AI Company, Sue Midjourney for Copyright Infringement: ‘This Is Theft’

"Disney and NBCU filed a federal lawsuit Tuesday against Midjourney, a generative AI start-up, alleging copyright infringement. The companies alleged that Midjourney’s own website “displays hundreds, if not thousands, of images generated by its Image Service at the request of its subscribers that infringe Plaintiffs’ Copyrighted Works.”

A copy of the lawsuit is at this link...

Disney and NBCU’s lawsuit includes images alleged to be examples of instances of Midjourney’s infringement. Those include an image of Marvel’s Deadpool and Wolverine (pictured above), Iron Man, Spider-Man, the Hulk and more; Star Wars’ Darth Vader, Yoda, R2-D2, C-3PO and Chewbacca; Disney’s Princess Elsa and Olaf from “Frozen”; characters from “The Simpsons”; Pixar’s Buzz Lightyear from “Toy Story” and Lightning McQueen from “Cars”; DreamWorks’ “How to Train Your Dragon”; and Universal‘s “Shrek” and the yellow Minions from the “Despicable Me” film franchise."

Monday, June 9, 2025

Getty argues its landmark UK copyright case does not threaten AI; Reuters, June 9, 2025

Reuters; Getty argues its landmark UK copyright case does not threaten AI

 "Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI began at London's High Court on Monday, with Getty rejecting Stability AI's contention the case posed a threat to the generative AI industry.

Seattle-based Getty, which produces editorial content and creative stock images and video, accuses Stability AI of using its images to "train" its Stable Diffusion system, which can generate images from text inputs...

Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists.

Lawyers say Getty's case will have a major impact on the law, as well as potentially informing government policy on copyright protections relating to AI."