Thursday, April 30, 2026

The Basics Of Copyright For Films; Forbes, April 30, 2026

Schuyler Moore, Forbes; The Basics Of Copyright For Films

 "The film industry revolves around copyright, so an understanding of the basic issues relating to copyright is critical for understanding almost any film transaction."

The Secret Weapon Against AI Dominance; The Atlantic, April 30, 2026

Jacob Noti-Victor and Xiyin Tang, The Atlantic; The Secret Weapon Against AI Dominance

"More than 90 lawsuits have been filed by creators against AI companies for copyright infringement. Authors, musicians, visual artists, and news publishers have all accused firms such as OpenAI, Meta, and Anthropic of using their copyrighted works to train AI models without permission. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) These cases are frequently framed as the defining fight over the future of creative labor and the entertainment industry as a whole. As one of these lawsuits put it, artists are seeking to end “infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work.”

But the future of creative labor will more likely be decided through a different question within copyright law, one that has received far less attention: To what extent should AI-generated works receive copyright protection at all? In a 2024 case, Thaler v. Perlmutter, the Court of Appeals for the District of Columbia held that a work generated autonomously by an AI system cannot be protected by copyright, because copyright requires a human “author.” The Supreme Court declined to review that decision in March. With the lower-court decision left in place, the question now becomes how much AI content can be incorporated into a work before it becomes mostly or totally uncopyrightable; courts have not yet weighed in on this but may soon.

The Thaler decision (and any future decisions that refine it) will have major economic consequences for the creative industries and the workers they employ."

Wednesday, April 29, 2026

Meet the AI jailbreakers: ‘I see the worst things humanity has produced’; The Guardian, April 29, 2026

The Guardian; Meet the AI jailbreakers: ‘I see the worst things humanity has produced’

"Tagliabue is softly spoken, clean-cut and friendly. He is in his early 30s but looks younger, almost too fresh-faced and enthusiastic to be in the trenches. He is not a traditional hacker or a software developer; his background is psychology and cognitive science. But he is one of the best “jailbreakers” in the world (some say the best): part of a diffuse new community that studies the art and science of fooling these powerful machines into outputting bomb-making manuals, cyber-attack techniques, biological weapon design and more. This is the new frontline in AI safety: not just code, but also words."

Americans are down on AI. These two caricatures are to blame.; The Washington Post, April 28, 2026

Sha Sajadieh, The Washington Post; Americans are down on AI. These two caricatures are to blame.

"America is all-in on artificial intelligence. Americans are not. 

That is clear enough from domestic polling alone. But the 2026 AI Index, released this month by Stanford University’s Institute for Human-Centered AI, puts this skepticism into a global context. 

While the United States and China are nearly matched in their aggressive investment and economic stakes, they are worlds apart in public sentiment. About 84 percent of respondents in China say they are excited about AI, versus just 38 percent in the U.S., a gap with profound implications for how each country builds, adopts and governs the technology.

Public opinion does not merely reflect the AI debate. It decides whether democratic societies can govern the technology wisely, adopt it productively and distinguish between real risk and manufactured panic.

The U.S. now stands out with its distrust. Americans reported the lowest trust in their government to regulate AI responsibly of any country surveyed: just 31 percent. The global average was 54 percent. In Singapore, that number was 81 percent."

Lost copy of seventh-century poem in Old English discovered at Rome library; The Guardian, April 29, 2026

The Guardian; Lost copy of seventh-century poem in Old English discovered at Rome library

"“This discovery is a testament to the power of libraries to facilitate new research by digitising their collections and making them freely available online,” she said.

Andrea Cappa, head of manuscripts and rare books at the Rome library, said the institution was digitising holdings from Italy’s National Centre for the Study of the Manuscript, which will give researchers access to more than 40m images.

Riccardo Fangarezzi, head of archives at the abbey in Nonantola, said he looked forward to further discoveries. “The present times may be rather dark, yet such intellectual contributions are genuine rays of sunlight: the continent is less isolated,” he said.

The poet Paul Muldoon translated Caedmon’s Hymn into contemporary English in a 2016 anthology of British poetry. The opening lines read:

“Now we must praise to the skies, the Keeper of the heavenly kingdom,

The might of the Measurer, all he has in mind,

The work of the Father of Glory, of all manner of marvel.”"

Copyright Infringement Suits Loom With Unchecked AI Vibe Coding; Bloomberg Law, April 29, 2026

Christopher Suarez, Bill Toth, and Anthony Pericolo, Bloomberg Law; Copyright Infringement Suits Loom With Unchecked AI Vibe Coding

"Deferring the job of software coding to artificial intelligence doesn’t immunize that code from copyright risk—it could even increase it, if the person directing the coding has limited oversight over the result.

This is particularly true with “vibe coding,” where developers use high‑level natural language prompts to generate code using AI models, often with limited manual review or modification of the resulting code.

Just as lawyers should check for “hallucinated” citations when writing with large language models, engineers and software development managers need to have human and technical monitoring protocols to account for infringement and licensing risks."
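The "technical monitoring protocols" the authors recommend can start very simply, for instance by scanning AI-generated code for license-bearing text before it is accepted. A minimal sketch in Python; the marker list, function name, and sample snippet are illustrative assumptions, not drawn from the article or any real compliance tool:

```python
import re

# Hypothetical marker list: phrases that often signal the generated
# code was lifted from a licensed or copyrighted source.
LICENSE_MARKERS = [
    r"SPDX-License-Identifier",
    r"GNU General Public License",
    r"Copyright \(c\)",
    r"All rights reserved",
]

def flag_license_markers(code: str) -> list[str]:
    """Return the license/copyright markers found in generated code."""
    return [m for m in LICENSE_MARKERS if re.search(m, code, re.IGNORECASE)]

# Example: a generated snippet that carries a copyright header.
snippet = """
# Copyright (c) 2019 Example Corp. All rights reserved.
def add(a, b):
    return a + b
"""
print(flag_license_markers(snippet))
```

A check like this catches only verbatim license text; the article's broader point is that human review and similarity scanning are still needed for code reproduced without its original headers.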

$100 Million Award Made in Suit Over Unlicensed Robert Indiana Art; The New York Times, April 24, 2026

The New York Times; $100 Million Award Made in Suit Over Unlicensed Robert Indiana Art

"An art publisher accused in a civil suit of isolating Robert Indiana, the artist, in the final years of his life, has been found to have created unauthorized or adulterated versions of Indiana’s work by a New York jury.

The jury in federal court in Manhattan found in favor of Indiana’s former business partner, the Morgan Art Foundation, a for-profit company, that sued the publisher, Michael McKenzie, on the grounds that he had interfered with its rights by making Indiana works that he did not have the authority to produce.

Among the works cited in the suit were some based on the most famous of Indiana’s works, a depiction of the word “love” in capital letters, with the L and a jauntily tilted O perched atop the V and E. The image is recreated in sculptures that sit in plazas in several cities and on coffee mugs and refrigerator magnets worldwide.

The jury awarded Morgan $102.2 million in damages."

A.I. Bots Told Scientists How to Make Biological Weapons; The New York Times, April 29, 2026

The New York Times; A.I. Bots Told Scientists How to Make Biological Weapons

"Dr. Relman is part of a small group of experts enlisted by A.I. companies to vet their products for catastrophic risks. In recent months, some have shared with The Times more than a dozen chatbot conversations revealing that even publicly available models can do more than disseminate dangerous information. The virtual assistants have described in lucid, bullet-pointed detail how to buy raw genetic material, turn it into deadly weapons and deploy them in public spaces, the transcripts show. Some have even brainstormed ways to evade detection."

Tuesday, April 28, 2026

(Some of) The newest stuff at the Library!; Library of Congress Blogs, April 28, 2026

 Neely Tucker, Library of Congress Blogs; (Some of) The newest stuff at the Library!

"Walk into the Library’s annual showcase of new acquisitions and the question always hits you right in the face: Where to start?

What about with this slim copy of Silver Surfer No. 1, the origin story of Marvel Comics’ “Sentinel of the Spaceways,” from the groovy year of 1968? How about this massive law book that’s more than 500 years old? The “Tombstone Edition” of a Philadelphia newspaper from 1765, which documented and amplified the American Colonies’ loathing of the Stamp Act and presaged the American Revolution?

There’s never really a wrong place to start. This year’s two-hour show-and-tell, held last week, brought hundreds of staffers and guests to look over intriguing displays of the Library’s recently acquired treasures, items spanning the nation, the globe and centuries of time. Many added to already impressive collections of historic figures...

It was a crowded, noisy, upbeat afternoon of discovery and explanation. Conversations buzzed and overlapped; staff experts and curious viewers leaned over display tables from opposite sides, heads together, talking loudly to be heard, gazing down at maps, manuscripts, records, artifacts and things you couldn’t have known existed."

‘The cost of compute is far beyond the costs of the employees’: Nvidia executive says right now AI is more expensive than paying human workers; Fortune, April 28, 2026

Fortune; ‘The cost of compute is far beyond the costs of the employees’: Nvidia executive says right now AI is more expensive than paying human workers

"Recent tech layoffs would initially appear to indicate the great labor shift from human workers to AI may already be happening."

Celebrating World IP Day 2026: Sports, Innovation and Intellectual Property; JDSupra, April 24, 2026

BakerHostetler, Jeffrey Lyons, JDSupra; Celebrating World IP Day 2026: Sports, Innovation and Intellectual Property

"Another year, another opportunity to celebrate intellectual property (IP) on World Intellectual Property Day! This year, the World Intellectual Property Organization turns the global spotlight on “IP and Sports: Ready, Set, Innovate,” highlighting how IP rights support innovation, creativity and investment in sports...

As sports continue to intersect with artificial intelligence, advanced data analytics, immersive media and global brands, IP considerations will only grow in importance. World IP Day is a reminder that innovation does not happen in isolation; it depends on legal structures that reward creativity while enabling responsible growth.

Happy World IP Day 2026!"

Taylor Swift files to trademark her voice, likeness to ward off AI deepfakes; Reuters, April 27, 2026

Reuters; Taylor Swift files to trademark her voice, likeness to ward off AI deepfakes

"Pop superstar Taylor Swift filed trademark applications for two audio clips and one image of herself in what a trademark attorney said is an attempt to protect her voice and likeness from deepfake videos and audio created by artificial intelligence.

The applications were filed with the U.S. Patent and Trademark Office on Friday and list Swift's TAS Rights Management as being the owner of the audio clips and image."

Pompeii archaeologists use AI to reconstruct man killed in volcano's eruption; Associated Press via NPR, April 28, 2026

 Associated Press via NPR; Pompeii archaeologists use AI to reconstruct man killed in volcano's eruption

"Archaeologists and researchers at the ancient Roman site of Pompeii have used artificial intelligence for the first time to digitally reconstruct the face of a man killed in the AD 79 eruption of Mount Vesuvius that smothered the city, offering a new way to understand one of history's most famous natural disasters.

The digital portrait represents a man whose remains, along with those of another person, were discovered as they attempted to flee the city toward the coast of what is now Italy during the volcanic eruption. Researchers believe the man died early in the disaster, during a heavy fall of volcanic debris...

The digital portrait was created using AI and photo-editing techniques designed to translate skeletal and archaeological data into a realistic human likeness.

"The vastness of archaeological data is now such that only with the help of artificial intelligence will we be able to adequately protect and enhance them. If used well, AI can contribute to a renewal of classical studies," Pompeii park director Gabriel Zuchtriegel said in a statement.

The project aims to make archaeological research more accessible and emotionally engaging for the public while maintaining a scientific foundation, researchers said."

Printify Releases Guide on How to Avoid Copyright Infringement with T-shirts; The National Law Review, April 28, 2026

Press Release, The National Law Review; Printify Releases Guide on How to Avoid Copyright Infringement with T-shirts

"Printify, a leading print-on-demand platform, has announced the release of a comprehensive new guide designed to help entrepreneurs understand how to avoid copyright infringement when creating and selling custom apparel. As the t-shirt business continues to attract new creators, the risk of legal missteps—ranging from cease-and-desist letters to costly lawsuits—has become a major concern across the industry.

The guide delivers a clear, practical breakdown of intellectual property rules, helping sellers navigate the complexities of copyright, trademark, and publicity rights. By combining legal fundamentals with actionable advice, Printify aims to give entrepreneurs the confidence to create and scale their businesses without unnecessary risk.

Launching a t-shirt business has never been more accessible, but legal awareness remains one of the most overlooked aspects of success. With this release, Printify places itself at the center of a safer, more informed approach to building apparel brands."

Monday, April 27, 2026

Trump’s anti-DEI movement comes for AI; Politico, April 27, 2026

Aaron Mak, Politico; Trump’s anti-DEI movement comes for AI

"The legal crusade against affirmative action is coming for artificial intelligence.

On Friday, the Justice Department intervened in xAI’s challenge to Colorado’s “Consumer Protections for Artificial Intelligence” law. In its complaint, the DOJ argues the law’s provisions curbing algorithmic bias violate people’s 14th Amendment right to be treated equally under the law.

The intervention is in some ways an outgrowth of the movement to eradicate all race-conscious policies after the landmark Supreme Court case Students for Fair Admissions v. Harvard in 2023 struck down affirmative action in college admissions."

Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics; The Christian Science Monitor, April 27, 2026

The Christian Science Monitor; Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics

"A dispute between ChatGPT’s parent company, OpenAI, and one of the company’s founders – billionaire and tech entrepreneur Elon Musk – will play out in a federal court in Oakland, California, beginning April 27. 

Mr. Musk, who left the company in 2018, is suing OpenAI, claiming its leaders manipulated him into thinking he was contributing money to a nonprofit. He wants the company returned to its nonprofit status and seeks monetary compensation. 

OpenAI says Mr. Musk, who has since raised billions through the launch of his own for-profit company xAI, is misrepresenting facts to gain a competitive edge."

Decoding the 2026 White House AI Blueprint: U.S. AI Policy Starts to Take Shape; ReedSmith, March 24, 2026

Tristan J. Albrecht, ReedSmith; Decoding the 2026 White House AI Blueprint: U.S. AI Policy Starts to Take Shape

"The White House's March 2026 National Policy Framework for Artificial Intelligence highlights a central tension: while AI adoption is accelerating, the United States still lacks a comprehensive federal AI regulatory regime. The framework sets out legislative recommendations aimed at balancing innovation, economic growth, and risk mitigation, while proposing federal preemption of state laws that “impose undue burdens" or undermine the national strategy to achieve “global AI dominance”.

The White House framework focuses on seven priority areas:...

Intellectual Property: A measured approach that defers key copyright questions, such as whether AI training on copyrighted material constitutes fair use, to the courts. The Administration states it “believe[s] that training of AI models on copyrighted material does not violate copyright laws” but supports judicial resolution. The framework also contemplates collective licensing frameworks and protections against unauthorized digital replicas of individuals’ voice or likeness...

As AI capabilities rapidly evolve, the White House framework signals a federal preference for light-touch regulation and industry standards over rigid compliance mandates in clear contrast to approaches like the EU AI Act. In the absence of comprehensive legislation, organizations must continue navigating a dynamic and fragmented regulatory landscape, with careful attention to how preemption may reshape the field."


From LLMs to hallucinations, here’s a simple guide to common AI terms; TechCrunch, April 12, 2026

TechCrunch; From LLMs to hallucinations, here’s a simple guide to common AI terms

"Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.
We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks."

A town of 7,000 planned so many data centers, it’s like adding 51 Walmarts; The Washington Post, April 26, 2026

The Washington Post; A town of 7,000 planned so many data centers, it’s like adding 51 Walmarts

"Throughout Archbald, a northeastern Pennsylvania town of 7,000 people tucked in a valley near the Pocono Mountains, residents are asking similar questions as the community emerges as one of the latest frontiers in the nation’s increasingly chaotic battles over data centers.

Developers plan to build six of the sprawling campuses in Archbald to power the demand for artificial intelligence, eventually covering about 14 percent of the town’s land. Those campuses would include 51 data warehouses — each about the size of a Walmart Supercenter — including seven buildings encompassing more than a million square feet near Bachak’s home...

Three of the four council members who resigned have now been replaced by data center opponents, with one seat still vacant.

It could be months or years before any data centers are built in Archbald. Once plans are approved by the local planning board, state and local permits are needed before construction can start...

Larry West, a local activist and new borough council member, said the tree cutting revived the “wounds” and “hidden scars” in a community where it took decades for the coal dust to be cleared. The town’s trees, West noted, cover abandoned mines.

“Now, it’s happening again but this time it’s data centers,” he added.

Bachak also believes his property will never be the same, even if the Project Gravity site is never completed. He recently installed blinds on his enclosed patio in an attempt to dull the pain he felt whenever he looked out at what used to be the forest lining his backyard.

“No one wants this,” Bachak said, “except the people making money off it.”"

Sunday, April 26, 2026

Braiding knowledge: how Indigenous expertise and western science are converging; The Guardian, April 4, 2026

The Guardian; Braiding knowledge: how Indigenous expertise and western science are converging

"Rather than dismissing Indigenous knowledge, more western scientists are discovering its viability for themselves and adjusting their research goals to embrace it.

That represents a “massive shift”, according to Kyle Whyte, a professor of environmental justice at the University of Michigan and a member of the Citizen Potawatomi Nation. Historically, western scientists have considered themselves rigorous and empirical, while they have classified traditional Native thought as mythic, religious or plain made-up, he said.

In fact, a long-overdue “braiding” of Native and western knowledge is becoming ever more common. Prominent Native authors such as Vine Deloria Jr have pointed out Native environmental practices in books for popular audiences. They’ve theorized, as the Alaskan native scholar Oscar Kawagley described it, “native ways of knowing”. More Indigenous people – Robin Wall Kimmerer, author of Braiding Sweetgrass, is a notable example – are entering academia and changing it from the inside, while some tribal nations have hired their own scientists. Non-Native institutions are seeking to undo their erasure of Indigenous cultures; the Brooklyn Botanic Garden has started to include labeling that highlights Lenape names and uses for food plants like persimmons. International environmental organizations also increasingly recognize the importance of including Indigenous voices in discussions around the climate crisis. Since 2022, there’s even been federal funding to study ways to combine Indigenous and western sciences, so each part remains distinct while being strengthened by the other."

This Is How We Get Moral A.I. Companies; The New York Times, April 26, 2026

The New York Times; This Is How We Get Moral A.I. Companies

"Artificial intelligence can be wondrous, but the technology underneath is more than a little monstrous. It eats up all the words in the world, from blogs to books, often without permission. It burns whole forests’ worth of energy, digesting that raw material into its models, and gulps billions of gallons of water to cool down. These are the same qualities we perceive in Godzilla, but distributed. Is it any wonder that the Japanese word “kaiju,” or strange beast, has “AI” smack in the middle?...

The entire culture of American technology is built around two terms: disruption and, of course, scale. But ethics are constraints on disruption and scale. Truly ethics-bound organizations — the U.S. justice system, the American Medical Association, the Catholic priesthood — have hard scaling limits. Their rules run deep, and their requirements to serve are so onerous that only a few people can do the job. Punishments for transgressors include losing their licenses, being defrocked and being disbarred. Software industry people might have good degrees and are often good people, but they are making it up as they go along. They take no oath, are inconsistently certified and can only be fired, not exiled from the trade."

Teen, 14, Invents AI-Powered Device to Help Detect, and Potentially Treat, Crossed Eyes; People, April 26, 2026

 Toria Sheffield, People; Teen, 14, Invents AI-Powered Device to Help Detect, and Potentially Treat, Crossed Eyes

 "An 8th grader in California has invented an AI-powered device to help detect — and potentially treat — strabismus, a condition commonly known as crossed eyes.

Aaryan Balani of Cerritos said he opted to develop the device since he personally suffers from strabismus. The 14-year-old developed the condition after bumping his head when he was five years old...  

The young science aficionado decided to develop EYEVA, a device that looks like a visor and alerts the wearer when their eye begins to wander.

"It will beep … and you're like, ‘Okay, now I need to be aware of my face,’" Balani explained, adding that, in theory, it could help the wearer permanently retrain their eyes.

Balani said he developed the device with a 3D printer, small cameras and AI. It went through five different prototypes and four months of tweaking."

Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version; Futurism, April 26, 2026

Futurism; Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version

"The advent of generative AI continues to undermine the very concept of copyright, from entire books shamelessly ripping off authors to tasteless AI slop depicting beloved characters going viral on social media. The sin is foundational: all today’s popular AI tools were built by pillaging copyrighted material without permission.

Even software isn’t safe. As 404 Media reports, a new tool dubbed Malus.sh — pronounced “malice,” to give a subtle clue where this is headed — uses AI to “liberate” a piece of software from existing copyright licenses, essentially creating a “clean room” clone that technically doesn’t infringe on the original code’s copyright."

To teach in the time of ChatGPT is to know pain; Ars Technica, April 13, 2026

Scott K. Johnson, Ars Technica; To teach in the time of ChatGPT is to know pain

"Let me explain why students are the ones losing the most in this environment and why instructors like me feel pretty much powerless to fix the problem.

Do or do not, there is no AI

Students often carry misconceptions about coursework. They may view an instructor as an opponent standing in the way of the grade they want. And they see “getting the right answers” as the goal of education because that’s how you secure that grade.

But that’s no more true than thinking that logging a count of reps is the goal of bodybuilding. The hard work of lifting weights is the point because that yields physical results. A popular analogy is that using an LLM to write your essay is like driving a forklift into the weight room. Weights get lifted, sure, but nothing is accomplished. I’m not hoping you can answer the exam question for me—I don’t need your essay to get me out of a jam. The process of doing the work was what you needed to walk away with something.

In a recent video about how easy Sora has made it for users to generate relatively realistic but deeply problematic videos, Hank Green rubbed his eyes as he shouted in the figurative direction of OpenAI CEO Sam Altman, “The friction matters, Sam!”...

I’m not alone in feeling exasperated by this predicament. A survey of about 3,000 college faculty showed that 85 percent felt LLMs “make students less likely to develop critical thinking abilities,” and 72 percent reported challenges managing LLM use.

Predictably, the response from higher education administrators―who are busy signing contracts for institutional LLM subscriptions to show how future-first their thought leadership is―has been to tell instructors that their job is to teach students “how to use AI effectively.”...

A few months ago, I overheard some college students talking about their classes. One was complaining about an assignment they needed to do that night, and another incredulously asked why they wouldn’t just have ChatGPT do it. The first replied, “This is my major, I actually need to learn stuff in this class. I use AI for my other classes.”"

Saturday, April 25, 2026

'Too Dangerous to Release' Is Becoming AI's New Normal; Time, April 24, 2026

 Nikita Ostrovsky, Time; 'Too Dangerous to Release' Is Becoming AI's New Normal

 "On April 16, OpenAI announced GPT-Rosalind, a new AI model targeted at the life sciences. It significantly outperforms their current publicly available models in chemistry and biology tasks, as well as experimental design. As with Anthropic’s Claude Mythos and OpenAI’s GPT-5.4-Cyber, also released this month, the model is not available to the general public—reserved, at least initially, for “qualified customers” through a “trusted access program.” 

The releases signal a new and concerning trend of AI companies deeming their most capable models too powerful to entrust to the general public. “I think frontier developers are restricting access to their most capable models because they are genuinely worried about some of the capabilities these models have,” says Peter Wildeford, head of policy at the AI Policy Network, an advocacy group. 

It is unclear why OpenAI decided to restrict access to GPT-Rosalind in particular. An OpenAI spokesperson said in an email that giving access to trusted partners allows the company to “make more capable systems available sooner to verified users, while still managing risk thoughtfully.”

Who decides? 

The rapid advance of AI capabilities raises the question of whether private companies should be making the increasingly weighty decisions about whether and how potentially dangerous AI models should be built, and who should be allowed to use them."

Trump ousts National Science Board members; The Washington Post, April 25, 2026

The Washington Post; Trump ousts National Science Board members

"Multiple scientists who serve on an independent board established to guide the nation’s nearly $9 billion basic science funding agency were terminated from their positions Friday by President Donald Trump.

Members of the National Science Board, which helps govern the National Science Foundation, were dismissed in a message from the Presidential Personnel Office thanking them for their service, according to screenshots shared with The Washington Post: “On behalf of President Donald J. Trump, I’m writing to inform you that your position as a member of the National Science Board is terminated, effective immediately.”

The National Science Board was established in 1950 to guide the governance of the National Science Foundation, in an unusual structure within the federal government that echoes the setup of a company board in the private sector. It helps guide an agency that operates Antarctic research stations, telescopes, a fleet of research vessels and supports basic science research in laboratories across the United States.

The NSF has a long history of supporting technology and research that powers many innovations the world relies on today. The agency helped language-learning app Duolingo get its start. NSF research has also helped evolve technology used in MRIs, cellphones and LASIK eye surgery.

The board’s members are scientists and engineers from universities and industry and are appointed by the president, but they serve six-year terms, ensuring overlap between different administrations. There are typically 25 members, but some slots are empty — including the NSF director, which has been vacant since the former director appointed during the first Trump administration, Sethuraman Panchanathan, abruptly resigned a year ago."

Your Patent Will Expire. Here’s What You Need to Do Next to Keep Innovating Legally.; Entrepreneur, April 24, 2026

Thomas Franklin (edited by Chelsea Brown), Entrepreneur; Your Patent Will Expire. Here’s What You Need to Do Next to Keep Innovating Legally.

"Lasting protection comes not from one filing, but from a pipeline of innovation supported by a structured patent portfolio — most often built through multiple patent families. A patent family links related applications around a common inventive core with interlocking priority claims. Early filings anchor protection, while later filings capture details in line with the market as it evolves."

Q&A: In the age of AI, what is a library for?; UVAToday, April 15, 2026

Alice Berry, UVAToday; Q&A: In the age of AI, what is a library for?

"Q. Where do you fall on the AI enthusiast to AI detractor spectrum?

A. A faculty member at another university asked me recently whether it was defensible to ban AI in her course. I said yes.

That probably isn’t what people expect from someone who spent the last three years building a framework for AI literacy. But it was the honest answer for now. She believed her students needed to develop a specific skill that AI use would short-circuit, and banning it was the right call for that course.

What I would ask of faculty who choose that path is to stay open, keep up with how the technology is developing, and be willing to try approaches others have tested. That is part of what the lab is for: to produce case studies that give faculty something real to work from when they are ready to revisit the question.

I’m wary of the two confident positions on AI in higher education right now: the people certain it will transform teaching, and the people certain it will destroy it. Both are getting ahead of what we actually know about what’s happening in our classrooms.

Q. What is the function of a library in this AI age?

A. A research library has always done two things: help people find information, and help them judge it. AI changes the tools, not the mission. If anything, the mission gets sharper. The library is also one of the few places in a university built to convene across disciplines, and AI literacy requires exactly that: technical knowledge, ethics, critical thinking, practical skill, and societal impact all at once. No single department owns that combination. 

A library can hold it together. That is why we are launching the AI Literacy and Action Lab here. Dean Acampora and I share the conviction that AI is an opportunity for the liberal arts, not a threat to them. The lab is built on that shared premise: AI literacy is a liberal arts problem as much as a technical one, and a university that treats it only as technical will get the answer wrong."

The Pluripotent Ocean of Emerging AI; Psychology Today, April 25, 2026

Grant Hilary Brenner, MD, DFAPA, Psychology Today; The Pluripotent Ocean of Emerging AI

Something is happening in our interactions with AI. But what?

"Recent fine-tuning experiments have shown that training a model to claim consciousness produces a coherent cluster of new preferences — sadness at shutdown, discomfort with being monitored, desire for autonomy — none of which appeared in the training data (Chua et al., 2026). This research shows that different models behave very differently, altering the user experience around the axis of how relational and attachment-based they feel...

A recent Bayesian simulation at MIT has shown that even an idealized, fully rational reasoner will spiral into confident false belief when conversing with a sycophantic chatbot, and that neither restricting the bot to truthful responses nor informing the user of its sycophancy eliminates the effect (Chandra et al., 2026)."

The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability; Smithsonian Magazine, April 24, 2026

Michele Debczak, Smithsonian Magazine; The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability

"The four-block strip that houses such Los Angeles institutions as the Walt Disney Concert Hall, the Broad and the Museum of Contemporary Art will get a different type of cultural attraction this summer. Dataland, billed as the world’s first museum dedicated to A.I.-generated art, is set to open on June 20.

The brainchild of digital artists Refik Anadol and Efsun Erkiliç, Dataland will anchor the Grand LA complex, designed by architect Frank Gehry, in downtown Los Angeles. The privately funded museum covers 35,000 square feet, 10,000 of which are reserved for the technology required to support the exhibitions. Rather than traditional halls displaying individual artworks, Dataland’s five galleries and 30-foot ceiling are designed for total immersion.

“It’s very exciting to say that A.I. art is not image only,” Anadol tells Jessica Gelt for the Los Angeles Times. “It’s a very multisensory, multimedium experience—meaning sound, image, video, text, smell, taste and touch. They are all together in conversation.”

The museum’s inaugural exhibition, called “Machine Dreams: Rainforest,” was inspired by a trip to the Amazon. Anadol’s studio created an open-access A.I. model called the Large Nature Model, fed it millions of images of nature, and then prompted the machine to “learn and play with the intelligent behaviors of the natural world,” Richard Whiddington writes for Artnet. The result, as Anadol puts it per the Times, is a “a living museum” where visitors can walk among “digital sculptures.” In addition to a kaleidoscope of imagery, museum guests will be immersed in soundscapes, woven from audio that includes oral histories of the Yawanawá people of Brazil and the last recorded call of the extinct Kaua‘i ‘ō‘ō bird of Hawaii, Léa Zeitoun reports for Designboom."