Showing posts with label AI. Show all posts

Tuesday, April 7, 2026

The Copyright Act in the age of AI; Politico, April 6, 2026

Aaron Mak, Politico; The Copyright Act in the age of AI

"The Copyright Act is reaching a major milestone this year, yet some legal scholars aren’t sure how well it will hold up in the age of artificial intelligence.

Stanford University held a summit on Friday to celebrate (and fret about) the 1976 act, which is the foundation of modern copyright law, as the 50th anniversary of its signing approaches in October. Academics advanced a number of proposals to update and reinterpret American copyright law, though several also warned against stretching it too far. The consensus: AI will reshape copyright whether we like it or not, and it’s time to grapple with the implications."

Sunday, April 5, 2026

Claude's Constitution; Anthropic, January 21, 2026

Anthropic; Claude's Constitution

Our vision for Claude's character

"Claude’s constitution is a detailed description of Anthropic’s intentions for Claude’s values and behavior. It plays a crucial role in our training process, and its content directly shapes Claude’s behavior. It’s also the final authority on our vision for Claude, and our aim is for all of our other guidance and training to be consistent with it.

Training models is a difficult task, and Claude’s behavior might not always reflect the constitution’s ideals. We will be open—for example, in our system cards—about the ways in which Claude’s behavior comes apart from our intentions. But we think transparency about those intentions is important regardless.

The document is written with Claude as its primary audience, so it might read differently than you’d expect. For example, it’s optimized for precision over accessibility, and it covers various topics that may be of less interest to human readers. We also discuss Claude in terms normally reserved for humans (e.g., “virtue,” “wisdom”). We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.

This constitution is written for our mainline, general-access Claude models. We have some models built for specialized uses that don’t fully fit this constitution; as we continue to develop products for specialized use cases, we will continue to evaluate how to best ensure our models meet the core objectives outlined in this constitution.

For a summary of the constitution, and for more discussion of how we’re thinking about it, see our blog post “Claude’s new constitution.”

Powerful AI models will be a new kind of force in the world, and people creating them have a chance to help them embody the best in humanity. We hope this constitution is a step in that direction.

We’re releasing Claude’s constitution in full under a Creative Commons CC0 1.0 Deed, meaning it can be freely used by anyone for any purpose without asking for permission.

Many people at Anthropic and beyond contributed to the creation of this document, as did several Claude models. Amanda Askell is the primary author and wrote the majority of the text. Joe Carlsmith wrote significant parts of many sections and played a core role in revising the text. Chris Olah, Jared Kaplan, and Holden Karnofsky made significant contributions to its content and development. More detailed contribution statement and acknowledgments below.

The preface and the acknowledgements are not part of the official constitution."

The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code; Observer, March 31, 2026

Observer; The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code

"Father Brendan McGuire is writing a novel about a disenchanted monk and his A.I. companion. He’s doing it with Claude. That detail—a Catholic priest using Anthropic’s chatbot to explore questions of faith and artificial consciousness—tells you something about where Silicon Valley’s moral reckoning has arrived. McGuire, 60, leads St. Simon Catholic Parish in Los Altos, Calif., a congregation that counts some of the Valley’s A.I. researchers among its members. Earlier this year, he and a group of faith leaders helped Anthropic shape the Claude Constitution, the set of guiding principles governing how its A.I. behaves.

He is not, in other words, an outside critic. He is something more complicated: a true believer in both God and technology, trying to hold them in the same hand. “I left the tech industry, but it never really left me,” McGuire told Observer...

McGuire wasn’t Anthropic’s only religious collaborator. Bishop Paul Tighe of the Vatican’s Dicastery for Culture and Education and Brian Patrick Green, a technology ethics director at Santa Clara University, also reviewed the Claude Constitution. Green and other Catholic scholars recently filed a federal court brief supporting Anthropic in its lawsuit against the U.S. government, which challenges the company’s effective blacklisting by the Pentagon after it refused to allow its A.I. systems to be used for autonomous warfare or domestic surveillance. The brief praised those ethical limits as “minimal standards of ethical conduct for technical progress.”...

Anthropic says its engagement with religious voices—part of a broader effort to engage a wide variety of communities to keep pace with technological acceleration—is only a beginning. The company plans to expand outreach beyond Catholic institutions to other religious leaders going forward."

Thursday, April 2, 2026

AI gaps in the boardroom are becoming a reputational risk; Axios, April 2, 2026

 Eleanor Hawkins, Axios; AI gaps in the boardroom are becoming a reputational risk

"The big picture: Companies across every industry are being forced into rapid AI-driven transformation, but many corporate boards lack the expertise to guide strategy, manage risk or communicate decisions credibly to stakeholders.

By the numbers: Only 39% of Fortune 100 boards have any form of AI oversight, such as committees, a director with AI expertise, or an ethics board, according to McKinsey research.

  • Another recent report found that only 13% of S&P 500 companies have at least one director with AI-related expertise.
  • Similarly, McKinsey's survey of directors found that 66% say their boards have "limited to no knowledge or experience" with AI, and nearly one in three say AI does not even appear on their agendas.
  • And a report from the National Association of Corporate Directors (NACD) found that only 17% have established an AI education plan for directors, and 6% have a dedicated committee to oversee AI.

Between the lines: Having an AI-savvy board is a major competitive advantage, according to a recent MIT study."

Wednesday, April 1, 2026

USPTO announces agentic AI-assisted evaluator for patent eligibility determinations; United States Patent and Trademark Office (USPTO), April 1, 2026

United States Patent and Trademark Office (USPTO) ; USPTO announces agentic AI-assisted evaluator for patent eligibility determinations

"As part of the U.S. Patent and Trademark Office's (USPTO) continued efforts to incorporate artificial intelligence (AI) into agency operations—first with the Artificial Intelligence Search Automated Pilot Program, or “ASAP!,” for patent prior art references followed by the Trademark Classification Agentic Codification Tool, or “Class ACT,” for trademark searching—the USPTO today announced the first-of-its-kind agentic AI tool to assist in patent eligibility determinations under 35 U.S.C. §101. 

America’s Innovation Agency’s new AI system, termed “McConaughey Agentic Tasking Technology Helping Examiner Workload,” or “MATTHEW,” for short, will help examiners tackle the thorniest of eligibility questions as to whether claims presented are an abstract idea or a patent-eligible invention. “MATTHEW will greatly enhance our ability to make the close calls—or any call, really—as I herewith also suspend all applicable precedent, including Desjardins, Alice, and Mayo,” said USPTO Director John A. Squires. “Basically, in terms of eligibility, if MATTHEW says your invention is ‘Alright, Alright, Alright,’ then it’s ‘Alright, Alright, Alright’ with the USPTO.” 

“Initially, we had some concerns that we would be introducing a three-part test in place of the two-part test under Alice and Mayo, but I think we’ll be al…um, okay,” he continued.

“We want to equip our examiners—the best in the world at what they do—with the best tools to assist them,” said Director Squires. “In fact, MATTHEW was selected after careful evaluation of best-in-breed offerings, including the ‘Binary Eligibility Engaged Translation Language Environment Joint User Interface Computational Evaluator,’ or ‘BEETLEJUICE,’” he stated. “But the coders had some issues in testing when they said the name three times. I hope they’ll be al…um, okay,” remarked the Director. 

When asked if the USPTO licensed its tool in light of famed actor McConaughey’s recent Name Image and Likeness (NIL) ‘non-traditional’ registrations, Director Squires retorted, “Well, he’s the one who said, ‘trademark yourself!’—I think the Founders would have wanted this.” When asked if he had heard from Mr. McConaughey’s lawyers, Director Squires produced an unintelligible, guttural chanting sound and began rhythmically beating his chest with his fist.

For more information on this trailblazing AI system, please visit the USPTO website."

Tuesday, March 31, 2026

Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm; Reuters, March 16, 2026

Reuters; Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm

"In 2025, U.S. courts issued the first substantive, merits-stage decisions addressing whether the use of copyrighted works to train generative artificial intelligence systems constitutes "fair use." Although these rulings do not settle all open questions — and in some respects highlight emerging judicial disagreements — they represent a significant inflection point in copyright law's response to large language models, image generators, and other foundation models.

Taken together, these cases establish early guideposts for AI developers, publishers, media companies, and enterprises deploying generative AI systems. Below, we summarize the most important copyright ​decisions and pending cases shaping the law in 2025...

Conclusion and recommendations

The ​2025 decisions reflect cautious but meaningful progress in defining how copyright law applies to generative AI. Courts are increasingly receptive to fair use arguments for training on lawfully acquired data, deeply skeptical of speculative market-harm claims, and uniformly intolerant of piracy. At the same time, cases involving direct competition, news content, and human likeness may test the limits of these early rulings."

Monday, March 30, 2026

Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say; Axios, March 27, 2026

Julie Bowen, Axios; Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say

"Washington, D.C. — As policymakers grapple with how to regulate AI, the hardest questions around copyright and fair use are being punted to the courts, according to governance, creator, and technology experts at an Axios expert voices roundtable.

The big picture: With Congress moving slowly and disagreements over policy, judges are becoming the primary deciders of how AI companies and creators work together — or don't.

  • That's partly by necessity: "Fair use is incredibly complicated — case by case, fact specific," News/Media Alliance president and CEO Danielle Coffey said.
  • "Each case that we get … we start to get these new guideposts," Jones Walker partner Graham Ryan said.
  • Ryan said he expects at least three fair use decisions this year that will have implications for the broader AI-artist ecosystem.
  • Axios' Maria Curi and Ashley Gold moderated the March 25 discussion, which was sponsored by Adobe.

What they're saying: Legal uncertainty remains. For example, two courts within the same district, and during the same week, differed in the reasoning behind their rulings on similar matters of fair use and AI.

  • "There is a current, live controversy over … the extant understanding of the fourth factor in fair use, which is: Does the copy replace the market for the work?" said Kevin Bankston, senior adviser for the Center for Democracy & Technology.
  • Still, "we have been trying to support the process through the courts, because we think there is a really strong framework in copyright law for protecting artists right now," according to Public Knowledge president and CEO Chris Lewis."

Sunday, March 29, 2026

New Political Group to Push Trump’s A.I. Agenda in Midterms; The New York Times, March 29, 2026

The New York Times; New Political Group to Push Trump’s A.I. Agenda in Midterms

"A new political operation with strong ties to the Trump administration is preparing to spend big money to boost President Trump’s record on artificial intelligence.

The group, called Innovation Council Action, said on Sunday that it would spend at least $100 million this year on its activities. That will include a major advocacy push behind new A.I. policy guidelines unveiled by the White House this month that seek to block state laws regulating A.I. The group is organized as a nonprofit, but is likely to start a super PAC as part of that $100 million push. That structure would allow Innovation Council to help backers and attack opponents of Mr. Trump’s A.I. agenda...

Innovation Council, by contrast, is explicitly aligned with the Trump operation. It is led by Taylor Budowich, a longtime Trump political adviser who served as White House deputy chief of staff, and has the blessing of David Sacks, a White House official."


Meta’s court losses spell potential trouble for AI research, consumer safety; CNBC, March 29, 2026

Jonathan Vanian, CNBC; Meta’s court losses spell potential trouble for AI research, consumer safety

"Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it."

Friday, March 27, 2026

Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t; People, March 26, 2026

Karla Marie Sanford, People; Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t

“They call us old stupid farmers, you know, but we’re not,” said Ida Huddleston, 82

"A Kentucky mother and daughter are continuing to open up about their decision to keep their farmland rather than accept a multi-million payout that could pave the way for a data center, which may still be happening anyway.

“My grandfather and great-grandfather and a whole bunch of family have all lived here for years, paid taxes on it, fed a nation off of it,” Delsia Bare told CBS affiliate WKRC. “Even raised wheat through the Depression and kept bread lines up in the United States of America when people didn’t have anything else.”

Bare and her 82-year-old mom Ida Huddleston own hundreds of acres of farmland outside Maysville, according to WKRC. Together, the two have rejected over $26 million to sell part of the farmland to an undisclosed Fortune 100 company."

Thursday, March 26, 2026

America's Newspapers emphasizes importance of protecting publishers’ intellectual property; Editor & Publisher, March 25, 2026

Staff | America's Newspapers, Editor & Publisher; America's Newspapers emphasizes importance of protecting publishers’ intellectual property

"America’s Newspapers has issued the following statement in response to the comprehensive national legislative framework on artificial intelligence released by the Trump administration...

Specifically, the framework affirms that the creative works and unique identities of American innovators, creators and publishers must be respected in the age of AI. At the same time, it recognizes that artificial intelligence systems require access to information to learn and improve, and proposes a balanced approach that both enables innovation and safeguards the rights of content creators.

“America’s Newspapers strongly supports the administration’s recognition that high-quality journalism and original content are essential to the continued strength of our democracy and economy,” said Matt McMillan, chair of America’s Newspapers and CEO of Press Publications."

White House Unveils A.I. Policy Aimed at Blocking State Laws; The New York Times, March 20, 2026

The New York Times; White House Unveils A.I. Policy Aimed at Blocking State Laws

The Trump administration on Friday released new guidelines for federal legislation on the technology, recommending some safeguards for children and consumer protections for energy costs.

"The White House on Friday released policy guidelines that called for blocking state laws regulating artificial intelligence, while also recommending some safeguards for children and consumer protections for energy costs.

Dozens of states have passed laws in recent months to regulate A.I., which has created concerns about the technology’s potential to steal jobs, push up energy prices and threaten national security. But President Trump has made clear U.S. companies should have mostly free rein in a global race to dominate the technology.

On Friday, the White House called on Congress to pass federal A.I. legislation to override the state laws. Among the Trump administration’s suggested measures, Congress would streamline the process for building data centers, the warehouses full of computers that power A.I. The framework also proposed guardrails to prevent the government from using the technology for censorship, as well as mandating A.I.-related work force training."

Is Big Tech Facing a Big Tobacco Moment?; The New York Times, March 26, 2026

Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly and Brian O’Keefe, The New York Times; Is Big Tech Facing a Big Tobacco Moment?

Back-to-back courtroom losses have put technology giants, including Meta and Google, in uncertain territory as they face lawsuits and bans on teen users.

"Andrew here. Back in 2018, I moderated a panel at the World Economic Forum that included Marc Benioff of Salesforce. It was then that he essentially declared that Facebook was the modern-day equivalent of cigarettes, and that it and other social media companies should be regulated as such.

Well, Meta’s loss in court on Wednesday, in a case about whether its platforms were designed to be addictive to adolescents, may be a watershed. Investors don’t seem to be fazed — the company’s shares hardly moved after the verdict came out — but the decision could change the conversation around the company yet again. More below...

Some legal experts wonder if Big Tech is staring at a Big Tobacco moment, a reference to how cigarette makers had to overhaul their businesses — at a huge expense — after courts ruled that some of their products were addictive and harmful.

“We’re in a new era, a digital era, where we have to rethink definitions for products based on which entities might have superior information to prevent these injuries and accidents,” Catherine Sharkey, a professor of law at N.Y.U., told The Times. She added that the “implications” of those verdicts were “very, very big.”

“This has potentially large impacts on other areas in tech, A.I. and beyond that,” Jessica Nall, a San Francisco lawyer who represents tech companies and executives, told The Wall Street Journal. “The floodgates are already open.”

Meta and Google plan to appeal. The companies have signaled that they will fight efforts to make them drastically redesign their products and algorithms."

We're All Copyright Owners. Welcome to the Mess That AI Has Created; CNET, March 23, 2026

Katelyn Chedraoui, CNET; We're All Copyright Owners. Welcome to the Mess That AI Has Created

Copyright is one of the most important legal issues in the age of AI. And yes, it affects you.

"You probably rarely, if ever, think about copyright law. But if you want to understand why there are so many lawsuits being filed against AI companies, knowing a bit about copyright law is key. And whether you know it or not, these issues affect you.

If you've ever written a blog post, taken a photo or created an original video, you're a copyright owner. That's most of us, which means that copyright law -- its protections, limitations and application -- is more relevant to you than you might've thought. Sadly, copyright in the age of generative AI is something of a mess."

Tuesday, March 24, 2026

Fostering ethical use of AI in K-12 education; Iowa Public Radio, March 20, 2026

Iowa Public Radio; Fostering ethical use of AI in K-12 education

"The use of artificial intelligence in school has become more common since the launch of ChatGPT in late 2022. Today, a majority of U.S. teens say they use AI chatbots for school work, according to the Pew Research Center. 

On this episode of River to River, two Iowa-based educators who are working together in advancing ethical and human-centered approaches to artificial intelligence across K-12 education share their experiences. Iowa State University professor Evrim Baran is the project director of the Critical AI in Education Pathways Initiative, which launched a micro-credential course this month for educators. Chad Sussex founded the Winterset Community School District's AI task force, and has recently expanded into consulting for other school districts around the state.

Then we talk with Rebecca Winthrop, who coauthored a recent report on the potential negative risks that generative AI poses to students, and what can be done to prevent them while maximizing the potential benefits of AI.

Guests:

  • Evrim Baran, ISU professor of educational technology and human-computer interaction and Helen LeBaron Hilton Chair, College of Health and Human Sciences
  • Chad Sussex, grades 7-12 assistant principal and AI task force leader, Winterset Community School District
  • Rebecca Winthrop, senior fellow and director of the Center for Universal Education, Brookings Institution"

Monday, March 16, 2026

How Trump Drove a Wedge Between Florida Republicans Over A.I.; The New York Times, March 16, 2026

David McCabe, The New York Times; How Trump Drove a Wedge Between Florida Republicans Over A.I.

A Florida bill that would have regulated artificial intelligence, backed by Gov. Ron DeSantis, failed to gain traction after President Trump made it clear he did not want states to rein in the technology.

"Florida lawmakers failed to pass a sweeping bill aimed at reining in the power of artificial intelligence by the time their annual legislative session wrapped up Friday.

The legislation, known as an A.I. Bill of Rights, flopped even though Gov. Ron DeSantis, a Republican, had spent months championing it. The bill would have forced companies to disclose when they use A.I. chatbots to interact with consumers and forbidden the technology’s use in licensed mental health counseling, among other measures.

But Republicans in the Florida House of Representatives refused to take up the bill because of President Trump. Mr. Trump has visibly positioned himself as pro-A.I., signing executive orders to protect the tech industry and threatening states that try to regulate the technology. In recent weeks, the White House has communicated to state legislators around the country that it is wary of states regulating A.I., while Mr. Trump has reiterated his support for the technology in public."

Sunday, March 15, 2026

SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY; Pasadena Now, March 15, 2026

 Pasadena Now; SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY

A discussion today ties the 1818 novel's warnings about creator responsibility to contemporary debates over artificial intelligence, part of the city's One City, One Story program 

"Two centuries before algorithms began analyzing people’s dreams and predicting their crimes, Mary Shelley wrote a novel about a scientist who built something he could not control. That novel, “Frankenstein,” is the subject of a free discussion today at Hastings Branch Library, where presenter Rosemary Choate will connect its 207-year-old themes to the same questions about artificial intelligence that Pasadena’s citywide reading program is exploring all month.

The event, titled “Frankenstein: Myths and the Real Story?” is part of the Pasadena Public Library’s 24th annual One City, One Story program, which this year selected Laila Lalami’s “The Dream Hotel” — a dystopian novel about a woman detained because an algorithm, fed by data from her dreams, deemed her a future criminal. The library has organized a month of lectures, films and book discussions around the novel’s themes of surveillance, technology and freedom, and the Frankenstein session draws a direct line between Shelley’s 1818 tale and the anxieties at the center of Lalami’s story.

Choate, a comparative literature and humanities instructor and founder of the Pomona College Alumni Book Club, will lead the discussion at 3 p.m. She will examine themes including creator responsibility, the consequences of unchecked technological ambition and society’s rejection of the “creation” — questions the library’s event description calls “highly relevant to contemporary debates surrounding the development and governance of AI,” according to the Pasadena Public Library’s event listing.

Shelley published “Frankenstein; or, The Modern Prometheus” anonymously in 1818, when she was 20 years old. The novel tells the story of Victor Frankenstein, a young scientist who assembles a creature from dead body parts and recoils from what he has made. The creature, abandoned by its creator, becomes violent as it fails to find acceptance. The novel is widely considered one of the first works of science fiction.

The One City, One Story program, now in its 24th year, selects a single book each year for citywide reading and discussion. A 19-member committee of community volunteers, led by Senior Librarian Christine Reeder, chose “The Dream Hotel” for its exploration of surveillance, freedom and the reach of technology into private life. The program is sponsored by The Friends of the Pasadena Public Library and the Pasadena Literary Alliance.

The month of events culminates in a conversation with Lalami and Pasadena Public Library Director Tim McDonald on Saturday, March 21, at 2 p.m. at Pasadena Presbyterian Church, 585 E. Colorado Blvd. That event is also free and open to the public."

Music Copyright in the Gen AI Age: Where Are We Now?; Brooklyn Sports & Entertainment Law Blog, February 11, 2026

Sam Woods, Brooklyn Sports & Entertainment Law Blog; Music Copyright in the Gen AI Age: Where Are We Now?

"Imagine you are a musician who has dedicated years of your life creating an album or EP — tinkering with the production, revising lyrics, finding the perfect samples— and now, you have finally shared your art with the world and are thrilled with the project’s success. However, while scrolling on TikTok a few months later, you hear some familiar audio. Wait a minute, is that one of your songs? No… not quite, but why does it sound so similar? Turns out, the song was created using artificial intelligence (“AI”)."

AI is dressing up greed as progress on creative rights; Financial Times, March 14, 2026

Financial Times; AI is dressing up greed as progress on creative rights

"At this week’s London Book Fair, a lot of people were walking around with one particular title wedged under their arms. Called Don’t Steal This Book, its pages are empty apart from the names of thousands of authors, including Kazuo Ishiguro and Richard Osman. It’s a chilling protest against the rampant theft of creative work by tech firms, which could leave future artists unable to earn a living."