Showing posts with label AI LLMs.

Sunday, December 21, 2025

Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash; The Guardian, December 19, 2025

The Guardian; Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash

"The Productivity Commission has abandoned a proposal to allow tech companies to mine copyrighted material to train artificial intelligence models, after a fierce backlash from the creative industries.

Instead, the government’s top economic advisory body recommended the government wait three years before deciding whether to establish an independent review of Australian copyright settings and the impact of the disruptive new technology...

In its interim report on the digital economy, the commission floated the idea of granting a “fair dealing” exemption to copyright rules that would allow AI companies to mine data and text to develop their large language models...

The furious response from creative industries to the commission’s idea included music industry bodies saying it would “legitimise digital piracy under guise of productivity”."

Friday, October 17, 2025

Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis; Yale School of Medicine, October 15, 2025

 Naedine Hazell, Yale School of Medicine; Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis

"Google and Yale researchers have developed a more “advanced and capable” AI model for analyzing single-cell RNA data using large language models that is expected to “lead to new insights and potential biological discoveries.”

“This announcement marks a milestone for AI in science,” Google announced.

On social media and in comments, scientists and developers applauded the model—which Google released Oct. 15—as the much-needed bridge to make single-cell data accessible, or interpretable, by AI. 

Many scientists, including cancer researchers focusing on improving the outcomes of immunotherapies, have homed in on single-cell data to understand the mechanisms of disease that either protect, or thwart, its growth. But their efforts have been slowed by the size and complexity of data...

“Just as AlphaFold transformed how we think about proteins, we’re now approaching that moment for cellular biology. We can finally begin to simulate how real human cells behave—in context, in silico," van Dijk explained, following Google's model release. "This is where AI stops being just an analysis tool and starts becoming a model system for biology itself.”

An example of discoveries that could be revealed using this large-scale model with improved predictive power was tested by Yale and Google researchers prior to the release of the model. The findings will be shared in a forthcoming paper.

On Wednesday, the scaled-up model, Cell2Sentence-Scale 27B, was released. The blog post concluded: “The open model and its resources are available today for the research community. We invite you to explore these tools, build on our work and help us continue to translate the language of life.”"
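A minimal illustrative sketch, in Python, of the general "cell sentence" idea behind the Cell2Sentence line of work mentioned above: a cell's expression profile is converted into a text sequence of gene names ranked from most to least expressed, so that a large language model can read single-cell data as ordinary text. The gene symbols, counts, and helper function below are hypothetical placeholders for illustration only, not code or data from the released model.

import numpy as np

def cell_to_sentence(gene_names, expression, top_k=10):
    # Rank genes by expression (highest first) and join the top_k names into a "sentence".
    order = np.argsort(expression)[::-1][:top_k]
    return " ".join(gene_names[i] for i in order)

# Illustrative placeholder data: a handful of gene symbols and made-up counts.
genes = ["CD3D", "CD8A", "GZMB", "IL7R", "MS4A1", "NKG7"]
counts = np.array([12.0, 30.5, 8.2, 0.0, 1.5, 22.1])

print(cell_to_sentence(genes, counts, top_k=4))
# prints: CD8A NKG7 CD3D GZMB

In this framing, each cell becomes a short string of gene names that an LLM can be trained on or prompted with, which is what makes single-cell data "interpretable by AI" in the sense the article describes.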

Monday, September 8, 2025

Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up; Intellectual Property & Technology Law Journal, October 2025

Anna B. Naydonov, Mark Davies and Jules Lee, Intellectual Property & Technology Law Journal; Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up

"Probably no intellectual property (IP) topic in the last several years has gotten more attention than the litigation over the use of the claimed copyrighted content in training artificial intelligence (AI) models.The issue of whether fair use applies to save the day for AI developers is rightfully deemed critical, if not existential, for AI innovation. But whether class relief – and the astronomical damages that may come with it – is available in these cases is a question of no less significance."

Saturday, June 7, 2025

Do AI systems have moral status?; Brookings, June 4, 2025

Brookings; Do AI systems have moral status?

"In March, researchers announced that a large language model (LLM) passed the famous Turing test, a benchmark designed by computer scientist Alan Turing in 1950 to evaluate whether computers could think. This follows research from last year suggesting that the time is now for artificial intelligence (AI) labs to take the welfare of their AI models into account."