Showing posts with label AI. Show all posts

Sunday, March 29, 2026

New Political Group to Push Trump’s A.I. Agenda in Midterms; The New York Times, March 29, 2026

The New York Times; New Political Group to Push Trump’s A.I. Agenda in Midterms

"A new political operation with strong ties to the Trump administration is preparing to spend big money to boost President Trump’s record on artificial intelligence.

The group, called Innovation Council Action, said on Sunday that it would spend at least $100 million this year on its activities. That will include a major advocacy push behind new A.I. policy guidelines unveiled by the White House this month that seek to block state laws regulating A.I. The group is organized as a nonprofit, but is likely to start a super PAC as part of that $100 million push. That structure would allow Innovation Council to help backers and attack opponents of Mr. Trump’s A.I. agenda...

Innovation Council, by contrast, is explicitly aligned with the Trump operation. It is led by Taylor Budowich, a longtime Trump political adviser who served as White House deputy chief of staff, and has the blessing of David Sacks, a White House official."


Meta’s court losses spell potential trouble for AI research, consumer safety; CNBC, March 29, 2026

Jonathan Vanian, CNBC; Meta’s court losses spell potential trouble for AI research, consumer safety

"Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations. 

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it."

Friday, March 27, 2026

Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t; People, March 26, 2026

Karla Marie Sanford, People; Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t

“They call us old stupid farmers, you know, but we’re not,” said Ida Huddleston, 82

"A Kentucky mother and daughter are continuing to open up about their decision to keep their farmland rather than accept a multimillion-dollar payout that could pave the way for a data center, which may still be happening anyway.

“My grandfather and great-grandfather and a whole bunch of family have all lived here for years, paid taxes on it, fed a nation off of it,” Delsia Bare told CBS affiliate WKRC. “Even raised wheat through the Depression and kept bread lines up in the United States of America when people didn’t have anything else.”

Bare and her 82-year-old mom Ida Huddleston own hundreds of acres of farmland outside Maysville, according to WKRC. Together, the two have rejected over $26 million to sell part of the farmland to an undisclosed Fortune 100 company."

Thursday, March 26, 2026

America's Newspapers emphasizes importance of protecting publishers’ intellectual property; Editor & Publisher, March 25, 2026

Staff | America's Newspapers, Editor & Publisher; America's Newspapers emphasizes importance of protecting publishers’ intellectual property

"America’s Newspapers has issued the following statement in response to the comprehensive national legislative framework on artificial intelligence released by the Trump administration...

Specifically, the framework affirms that the creative works and unique identities of American innovators, creators and publishers must be respected in the age of AI. At the same time, it recognizes that artificial intelligence systems require access to information to learn and improve, and proposes a balanced approach that both enables innovation and safeguards the rights of content creators.

“America’s Newspapers strongly supports the administration’s recognition that high-quality journalism and original content are essential to the continued strength of our democracy and economy,” said Matt McMillan, chair of America’s Newspapers and CEO of Press Publications."

White House Unveils A.I. Policy Aimed at Blocking State Laws; The New York Times, March 20, 2026

The New York Times; White House Unveils A.I. Policy Aimed at Blocking State Laws

The Trump administration on Friday released new guidelines for federal legislation on the technology, recommending some safeguards for children and consumer protections for energy costs.

"The White House on Friday released policy guidelines that called for blocking state laws regulating artificial intelligence, while also recommending some safeguards for children and consumer protections for energy costs.

Dozens of states have passed laws in recent months to regulate A.I., which has created concerns about the technology’s potential to steal jobs, push up energy prices and threaten national security. But President Trump has made clear U.S. companies should have mostly free rein in a global race to dominate the technology.

On Friday, the White House called on Congress to pass federal A.I. legislation to override the state laws. Among the Trump administration’s suggested measures, Congress would streamline the process for building data centers, the warehouses full of computers that power A.I. The framework also proposed guardrails to prevent the government from using the technology for censorship, as well as mandating A.I.-related work force training."

Is Big Tech Facing a Big Tobacco Moment?; The New York Times, March 26, 2026

Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly and Brian O’Keefe, The New York Times; Is Big Tech Facing a Big Tobacco Moment?

Back-to-back courtroom losses have put technology giants, including Meta and Google, in uncertain territory as they face lawsuits and bans on teen users.

"Andrew here. Back in 2018, I moderated a panel at the World Economic Forum that included Marc Benioff of Salesforce. It was then that he essentially declared that Facebook was the modern-day equivalent of cigarettes, and that it and other social media companies should be regulated as such.

Well, Meta’s loss in court on Wednesday, in a case about whether its platforms were designed to be addictive to adolescents, may be a watershed. Investors don’t seem to be fazed — the company’s shares hardly moved after the verdict came out — but the decision could change the conversation around the company yet again. More below...

Some legal experts wonder if Big Tech is staring at a Big Tobacco moment, a reference to how cigarette makers had to overhaul their businesses — at a huge expense — after courts ruled that some of their products were addictive and harmful.

“We’re in a new era, a digital era, where we have to rethink definitions for products based on which entities might have superior information to prevent these injuries and accidents,” Catherine Sharkey, a professor of law at N.Y.U., told The Times. She added that the “implications” of those verdicts were “very, very big.”

“This has potentially large impacts on other areas in tech, A.I. and beyond that,” Jessica Nall, a San Francisco lawyer who represents tech companies and executives, told The Wall Street Journal. “The floodgates are already open.”

Meta and Google plan to appeal. The companies have signaled that they will fight efforts to make them drastically redesign their products and algorithms."

We're All Copyright Owners. Welcome to the Mess That AI Has Created; CNET, March 23, 2026


Katelyn Chedraoui, CNET; We're All Copyright Owners. Welcome to the Mess That AI Has Created

Copyright is one of the most important legal issues in the age of AI. And yes, it affects you.

"You probably rarely, if ever, think about copyright law. But if you want to understand why there are so many lawsuits being filed against AI companies, knowing a bit about copyright law is key. And whether you know it or not, these issues affect you.

If you've ever written a blog post, taken a photo or created an original video, you're a copyright owner. That's most of us, which means that copyright law -- its protections, limitations and application -- is more relevant to you than you might've thought. Sadly, copyright in the age of generative AI is something of a mess."

Tuesday, March 24, 2026

Fostering ethical use of AI in K-12 education; Iowa Public Radio, March 20, 2026

 

Iowa Public Radio; Fostering ethical use of AI in K-12 education

"The use of artificial intelligence in school has become more common since the launch of ChatGPT in late 2022. Today, a majority of U.S. teens say they use AI chatbots for school work, according to the Pew Research Center. 

On this episode of River to River, two Iowa-based educators who are working together in advancing ethical and human-centered approaches to artificial intelligence across K-12 education share their experiences. Iowa State University professor Evrim Baran is the project director of the Critical AI in Education Pathways Initiative, which launched a micro-credential course this month for educators. Chad Sussex founded the Winterset Community School District's AI task force, and has recently expanded into consulting for other school districts around the state.

Then we talk with Rebecca Winthrop, who coauthored a recent report on the potential negative risks that generative AI poses to students, and what can be done to prevent them while maximizing the potential benefits of AI.

Guests:

  • Evrim Baran, ISU professor of educational technology and human-computer interaction and Helen LeBaron Hilton Chair, College of Health and Human Sciences
  • Chad Sussex, grades 7-12 assistant principal and AI task force leader, Winterset Community School District
  • Rebecca Winthrop, senior fellow and director of the Center for Universal Education, Brookings Institution"

Monday, March 16, 2026

How Trump Drove a Wedge Between Florida Republicans Over A.I.; The New York Times, March 16, 2026

David McCabe, The New York Times; How Trump Drove a Wedge Between Florida Republicans Over A.I.

A Florida bill that would have regulated artificial intelligence, backed by Gov. Ron DeSantis, failed to gain traction after President Trump made it clear he did not want states to rein in the technology.

"Florida lawmakers failed to pass a sweeping bill aimed at reining in the power of artificial intelligence by the time their annual legislative session wrapped up Friday.

The legislation, known as an A.I. Bill of Rights, flopped even though Gov. Ron DeSantis, a Republican, had spent months championing it. The bill would have forced companies to disclose when they use A.I. chatbots to interact with consumers and forbidden the technology’s use in licensed mental health counseling, among other measures.

But Republicans in the Florida House of Representatives refused to take up the bill because of President Trump. Mr. Trump has visibly positioned himself as pro-A.I., signing executive orders to protect the tech industry and threatening states that try to regulate the technology. In recent weeks, the White House has communicated to state legislators around the country that it is wary of states regulating A.I., while Mr. Trump has reiterated his support for the technology in public."

Sunday, March 15, 2026

SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY; Pasadena Now, March 15, 2026

 Pasadena Now; SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY

A discussion today ties the 1818 novel's warnings about creator responsibility to contemporary debates over artificial intelligence, part of the city's One City, One Story program 

"Two centuries before algorithms began analyzing people’s dreams and predicting their crimes, Mary Shelley wrote a novel about a scientist who built something he could not control. That novel, “Frankenstein,” is the subject of a free discussion today at Hastings Branch Library, where presenter Rosemary Choate will connect its 207-year-old themes to the same questions about artificial intelligence that Pasadena’s citywide reading program is exploring all month.

The event, titled “Frankenstein: Myths and the Real Story?” is part of the Pasadena Public Library’s 24th annual One City, One Story program, which this year selected Laila Lalami’s “The Dream Hotel” — a dystopian novel about a woman detained because an algorithm, fed by data from her dreams, deemed her a future criminal. The library has organized a month of lectures, films and book discussions around the novel’s themes of surveillance, technology and freedom, and the Frankenstein session draws a direct line between Shelley’s 1818 tale and the anxieties at the center of Lalami’s story.

Choate, a comparative literature and humanities instructor and founder of the Pomona College Alumni Book Club, will lead the discussion at 3 p.m. She will examine themes including creator responsibility, the consequences of unchecked technological ambition and society’s rejection of the “creation” — questions the library’s event description calls “highly relevant to contemporary debates surrounding the development and governance of AI,” according to the Pasadena Public Library’s event listing.

Shelley published “Frankenstein; or, The Modern Prometheus” anonymously in 1818, when she was 20 years old. The novel tells the story of Victor Frankenstein, a young scientist who assembles a creature from dead body parts and recoils from what he has made. The creature, abandoned by its creator, becomes violent as it fails to find acceptance. The novel is widely considered one of the first works of science fiction.

The One City, One Story program, now in its 24th year, selects a single book each year for citywide reading and discussion. A 19-member committee of community volunteers, led by Senior Librarian Christine Reeder, chose “The Dream Hotel” for its exploration of surveillance, freedom and the reach of technology into private life. The program is sponsored by The Friends of the Pasadena Public Library and the Pasadena Literary Alliance.

The month of events culminates in a conversation with Lalami and Pasadena Public Library Director Tim McDonald on Saturday, March 21, at 2 p.m. at Pasadena Presbyterian Church, 585 E. Colorado Blvd. That event is also free and open to the public."

Music Copyright in the Gen AI Age: Where Are We Now?; Brooklyn Sports & Entertainment Law Blog, February 11, 2026

Sam Woods, Brooklyn Sports & Entertainment Law Blog; Music Copyright in the Gen AI Age: Where Are We Now?

"Imagine you are a musician who has dedicated years of your life creating an album or EP — tinkering with the production, revising lyrics, finding the perfect samples— and now, you have finally shared your art with the world and are thrilled with the project’s success. However, while scrolling on TikTok a few months later, you hear some familiar audio. Wait a minute, is that one of your songs? No… not quite, but why does it sound so similar? Turns out, the song was created using artificial intelligence (“AI”)."

AI is dressing up greed as progress on creative rights; Financial Times, March 14, 2026

Financial Times; AI is dressing up greed as progress on creative rights

"At this week’s London Book Fair, a lot of people were walking around with one particular title wedged under their arms. Called Don’t Steal This Book, its pages are empty apart from the names of thousands of authors, including Kazuo Ishiguro and Richard Osman. It’s a chilling protest against the rampant theft of creative work by tech firms, which could leave future artists unable to earn a living."

Saturday, March 14, 2026

Perspective: No copyright for AI-generated content; Northern Public Radio, March 13, 2026

David Gunkel, Northern Public Radio; Perspective: No copyright for AI-generated content

"What the courts actually decided is that neither the AI system nor the human who uses it counts as the author of the resulting work. Simply prompting ChatGPT or Claude to produce something isn’t considered the kind of creative activity that copyright law recognizes as authorship. And that creates an unexpected result. If neither the AI nor the human user is the author, then the work has no author at all. In effect, AI-generated images, music, and text become “orphan works”—creations that belong to no one. And that means that anyone can use them."

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Why I’m Suing Grammarly; The New York Times, March 13, 2026

The New York Times; Why I’m Suing Grammarly

"Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service...

In this global crisis of consent, we must grab hold of the few anchors we have for enforcement. The right of publicity is one of them, but it needs to be strengthened into a federal law — not just a patchwork of state laws. In some states, it applies only to advertising; in others, to all types of commercial uses. In some, it only covers celebrities; in others, it applies to everyone...

Denmark has taken a novel approach: proposing an amendment to copyright laws that would allow people to copyright their bodies, facial features and voices to protect against A.I. deepfakes. I’d be happy to copyright myself — as copyright seems to be the only law that is regularly enforced on the internet these days...

What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.

And it must be stopped."

What Was Grammarly Thinking?; The Atlantic, March 12, 2026

Kaitlyn Tiffany, The Atlantic; What Was Grammarly Thinking?

A short-lived AI tool promised to help users write like the greats—and a bunch of other random people, including me.

"But in the age of generative AI, there are many new kinds of copying. For instance, Wired reported last week on a tool offered by Grammarly, which briefly offered users the opportunity to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less-famous working journalists (including myself, per The Verge’s reporting), and a bunch of academics (including some who had recently died).

I say “briefly” because the company deactivated the feature today. A lot of people got really mad about it because none of the experts had agreed for their work to be used in such a way, or to serve as uncompensated marketing for an app that people use to help them write more legible emails. “We hear the feedback and recognize we fell short on this,” the company’s CEO, Shishir Mehrotra, wrote on his LinkedIn page yesterday. Not long after, Wired reported that one of the journalists whose name had been used in the feature, Julia Angwin, was filing a class-action lawsuit against Grammarly’s owner, Superhuman Platform. In a statement forwarded by a spokesperson, Mehrotra repeated apologies made in his LinkedIn post and added, “We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them.”...

Now that I’ve looked more closely at this not-very-useful feature, and now that it’s shut down, the whole situation seems a little absurd. This was just a weird and inappropriate thing that a company tried to do to make money without putting in very much effort. The primary reason it became a news story at all was that it touched on widespread anxiety about whose work is worth what, whose skills will continue to be marketable in the age of AI, and whether any of us are really as complex, singular, and impossible-to-imitate as we might hope we are."

Friday, March 13, 2026

Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming; The National Law Review, March 12, 2026

Nichole Hayden, Zahra Asadi, Nelson Mullins Idea Exchange - Insights, The National Law Review; Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming

"Artificial intelligence is quickly becoming part of the regulated gaming ecosystem. From electronic slot machines and casino games to online sportsbooks and betting platforms, AI is now used to assist with everything from game themes and visual design to user interfaces and marketing content. While these tools promise efficiency and faster development cycles, they also raise an important legal question for gaming companies: when AI is involved in creating game content, who actually owns the result?"

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

Amanda Robert, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Wednesday, March 11, 2026

Introducing The Anthropic Institute; Anthropic, March 11, 2026

 Anthropic; Introducing The Anthropic Institute

"We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies. The Anthropic Institute will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems.

In the five years since Anthropic began, AI progress has moved incredibly quickly. It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself.

We predict that far more dramatic progress will follow in the next two years. One of our company’s core convictions is that AI development is accelerating: that the improvements we make are compounding over time. Because of this, extremely powerful AI, like the kind our CEO Dario Amodei describes in Machines of Loving Grace, is coming far sooner than many think.

If this is right, society is shortly going to need to confront many massive challenges. How will powerful AI systems reshape our jobs and economies? What kinds of opportunities for greater societal resilience will they give us? What kinds of threats will they magnify or introduce? What are the expressed “values” of AI systems and how will society help companies determine what the appropriate values are? And, if the recursive self-improvement of AI systems does begin to occur, who in the world should be made aware, and how should these systems be governed?

The Anthropic Institute’s goal is to tell the world what we’re learning about these challenges as we build frontier AI systems, and to partner with external audiences to help address the risks we must confront. Whether our societies are able to do so will determine whether or not transformative AI delivers the radical upsides that we believe are possible in science, economic development, and human agency.

The Institute is led by our co-founder Jack Clark, who will assume a new role as Anthropic’s Head of Public Benefit. It has an interdisciplinary staff of machine learning engineers, economists, and social scientists, bringing together and expanding three of Anthropic’s research teams: the Frontier Red Team, which stress-tests AI systems to understand the outermost limits of their current capabilities; Societal Impacts, which studies how AI is being used in the real world; and Economic Research, which tracks its impact on jobs and the larger economy. The Institute will also incubate new teams, and is currently working on efforts around forecasting AI progress and better understanding how powerful AI will interact with the legal system.

The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making. At the same time, the Institute is a two-way street. It will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond. What we learn will inform what the Institute studies, and how our company as a whole chooses to act.

The Anthropic Institute has made several founding hires:

  • Matt Botvinick, a Resident Fellow at Yale Law School and previously Senior Director of Research at Google DeepMind and Professor in Neural Computation at Princeton, is joining the Institute to lead its work on AI and the rule of law.
  • Anton Korinek is joining the Economic Research team, on leave from his role as Professor of Economics at the University of Virginia, to lead an effort studying how transformative AI could reshape the very nature of economic activity.
  • Zoë Hitzig, who previously studied AI’s social and economic impacts at OpenAI, is joining to connect our economics work to model training and development."