My Bloomsbury book "Ethics, Information, and Technology" was published on Nov. 13, 2025. Purchases can be made via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
Tuesday, January 27, 2026
High Court Shouldn’t Weigh AI’s Copyright Author Status, US Says; Bloomberg Law, January 26, 2026
"A decision foreclosing nonhuman authorship for Stephen Thaler’s Creativity Machine didn’t conflict with any in other circuits or raise complicated questions about protections for artificial intelligence-assisted work by human authors, the Jan. 23 filing said."
A Lecture on Faith, Ethics and Artificial Intelligence; Episcopal News Service, Lecture: Saturday, March 7, 11 AM EST
Episcopal News Service; A Lecture on Faith, Ethics and Artificial Intelligence
"Join Grace Church Brooklyn Heights as we welcome Dr. Isaac B. Sharp for a lecture on faith, ethics and artificial intelligence addressing the question: What does Christian Ethics have to say about the promises and pitfalls of artificial intelligence, engaging questions of justice, agency and moral responsibility? The lecture will take place on Saturday, March 7th at Grace Church (254 Hicks Street, Brooklyn, NY 11201) at 11am. A light lunch will be provided. Please click here to register. For more information, please email The Rev. Leandra Lisa Lambert at LLambert@gracebrooklyn.org"
Monday, January 26, 2026
Behind the Curtain: Anthropic's warning to the world; Axios, January 26, 2026
Jim VandeHei, Mike Allen, Axios; Behind the Curtain: Anthropic's warning to the world
"Anthropic CEO Dario Amodei, the architect of the most powerful and popular AI system for global business, is warning of the imminent "real danger" that super-human intelligence will cause civilization-level damage absent smart, speedy intervention.
- In a 38-page essay, shared with us in advance of Monday's publication, Amodei writes: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species."
- "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."
Why it matters: Amodei's company has built among the most advanced LLM systems in the world.
- Anthropic's new Claude Opus 4.5 and coding and Cowork tools are the talk of Silicon Valley and America's C-suites.
- AI is doing 90% of the computer programming to build Anthropic's products, including its own AI.
Amodei, one of the most vocal moguls about AI risk, worries deeply that government, tech companies and the public are vastly underestimating what could go wrong. His memo — a sequel to his famous 2024 essay, "Machines of Loving Grace: How AI Could Transform the World for the Better" — was written to jar others, provoke a public debate and detail the risks.
- Amodei insists he's optimistic that humans will navigate this transition — but only if AI leaders and government are candid with people and take the threats more seriously than they do today.
Amodei's concerns flow from his strong belief that within a year or two, we will face the stark reality of what he calls a "country of geniuses in a datacenter.""
Search Engines, AI, And The Long Fight Over Fair Use; Electronic Frontier Foundation (EFF), January 23, 2026
Joe Mullin, Electronic Frontier Foundation (EFF); Search Engines, AI, And The Long Fight Over Fair Use
"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.
Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.
Today, the same argument is being recycled against AI. The question is whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works."
Sunday, January 25, 2026
How researchers got AI to quote copyrighted books word for word; Le Monde, January 24, 2026
Nicolas Six, Le Monde; How researchers got AI to quote copyrighted books word for word
"Where does artificial intelligence acquire its knowledge? From an enormous trove of texts used for training. These typically include vast numbers of articles from Wikipedia, but also a wide range of other writings, such as the massive Books3 dataset, which aggregates nearly 200,000 books without the authors' permission. Some proponents of conversational AI present these training datasets as a form of "universal knowledge" that transcends copyright law, adding that, protected or not, AIs do not memorize these works verbatim and only store fragmented information.
This argument has been challenged by a series of studies, the latest of which, published in early January by researchers at Stanford University and Yale University, is particularly revealing. Ahmed Ahmed and his coauthors managed to prompt four mainstream AI programs, disconnected from the internet to ensure no new information was retrieved, to recite entire pages from books."
Saturday, January 24, 2026
Copyright Office Doubles Down on AI Authorship Stance in the Midjourney Case; The Fashion Law (TFL), January 23, 2026
TFL, The Fashion Law (TFL); Copyright Office Doubles Down on AI Authorship Stance in the Midjourney Case
"The U.S. Copyright Office is standing firm in its position that works generated by artificial intelligence (“AI”), even when refined or curated by a human user, do not qualify for copyright protection unless the human author clearly limits their claim to their own original contributions. In a newly filed response and cross-motion for summary judgment, the Office is asking a federal court in Colorado to deny artist Jason Allen’s motion for summary judgment and uphold its refusal to register the work at issue, arguing that the dispute turns on the Copyright Act’s long-established human authorship requirement and not hostility toward AI."
Jimmy Kimmel slams FCC after chair demands changes to talk shows: 'Doing everything he can to shut us up'; Entertainment Weekly, January 23, 2026
Wesley Stenzel, Entertainment Weekly; Jimmy Kimmel slams FCC after chair demands changes to talk shows: 'Doing everything he can to shut us up'
"Jimmy Kimmel is fighting with the FCC once again.
The comedian used his Jimmy Kimmel Live monologue on Thursday night to draw attention to the organization's chair, Brendan Carr, after he demanded that talk shows like Live and The View give an equal amount of time to political candidates on both sides of the aisle.
"[Carr] is doing everything he can to shut us up, the easy way or the hard way," Kimmel said in the monologue.
The FCC released new guidance on Wednesday that argued that talk shows and late-night shows should not be considered "bona fide" news programs, and thus should not continue to receive exemption from the equal time rule. That night, President Trump posted a link to a Los Angeles Times article with the caption, "FCC takes aim at The View, Jimmy Kimmel Live in fight over ‘equal time’ rules for politicians."
The new guidance effectively states that shows like Jimmy Kimmel Live and The Late Show With Stephen Colbert must now give equal time to all political candidates running for the same office.
"They're reinterpreting long-agreed-upon rules to stifle us," Kimmel said. "And this one's a little bit complicated. It's not as easy as what happened the last time. So I want to break it down.""
Philadelphia sues US government for removal of slavery-related exhibit; The Guardian, January 23, 2026
Marina Dunbar, The Guardian; Philadelphia sues US government for removal of slavery-related exhibit
"Philadelphia is taking legal action against the Trump administration following the National Park Service’s decision to dismantle a long-established slavery-related exhibit at Independence National Historical park, which holds the former residence of George Washington.
The city filed its lawsuit in federal court on Thursday, naming the US Department of Interior and its secretary, Doug Burgum, the National Park Service, and its acting director, Jessica Bowron, as defendants. The lawsuit seeks a court order requiring the exhibits to be restored while the case proceeds.
The display stood at the President’s House site, once home to George Washington and John Adams, and included information recognizing people enslaved by Washington, along with a broader chronology of slavery in the US...
The Democratic governor of Pennsylvania, Josh Shapiro, sharply criticized the decision to take down the signs, arguing that Trump “will take any opportunity to rewrite and whitewash our history.”
“But he picked the wrong city – and he sure as hell picked the wrong Commonwealth,” Shapiro added in a message posted on X. “We learn from our history in Pennsylvania, even when it’s painful.”...
Congress had encouraged the National Park Service in 2003 to formally acknowledge the enslaved people who lived and worked at the President’s House. The lawsuit states that in 2006, the city and the agency agreed to collaborate on creating an exhibit for the site, which opened in 2010 with a memorial and informational panels focused on slavery.
The removal of the exhibit is part of a broader effort by the Trump administration to eliminate cultural content that does not align with his policy agenda."
Friday, January 23, 2026
Beckham family tensions put spotlight on celebrity trademark disputes; Reuters, January 23, 2026
Paul Sandle and Marie-Louise Gumuchian, Reuters; Beckham family tensions put spotlight on celebrity trademark disputes
How the National Park Service Is Deleting American History; The New York Times, January 23, 2026
Maxine Joselow and Jennifer Schuessler, The New York Times; How the National Park Service Is Deleting American History
[Kip Currier: Trump 2.0's ongoing efforts to censor and erase history and science are appallingly Orwellian, yet also childishly regressive and unevolved.
When this modern Dark Age of willful ignorance and information suppression has passed, the uncomfortable truths, silenced voices, and inescapable facts will need to be restored to our collective historical record and cultural heritage institutions.]
[Excerpt]
"At Independence National Historical Park in Philadelphia, the Trump administration took down an exhibit on the contradiction between President George Washington’s enslavement of people and the Declaration of Independence’s promise of liberty.
At Muir Woods National Monument in California, the administration dismantled a plaque about how the tallest trees on the planet could help store carbon dioxide and slow the Earth’s dangerous warming.
And at Lowell National Historical Park in Massachusetts, Trump officials ordered the National Park Service to stop showing films about the women and immigrants who once toiled in the city’s textile mills.
Across the country, Park Service workers have started taking down plaques, films and other materials in connection with a directive from President Trump to remove or rewrite content that may “disparage Americans” or promote “corrosive ideology.”"
Actors And Musicians Help Launch “Stealing Isn’t Innovation” Campaign To Protest Big Tech’s Use Of Copyrighted Works In AI Models; Deadline, January 22, 2026
Ted Johnson, Deadline; Actors And Musicians Help Launch “Stealing Isn’t Innovation” Campaign To Protest Big Tech’s Use Of Copyrighted Works In AI Models
"A long list of musicians, content creators and actors are among those who have signed on to a new campaign to protest tech giants’ use of copyrighted works in their AI models.
The list of signees includes actors like Scarlett Johansson and Cate Blanchett, music groups like REM and authors like Brad Meltzer.
The “Stealing Isn’t Innovation” campaign is being led by the Human Artistry Campaign. It states that while laws “respect and protect” the creative community, “some of the biggest tech companies, many backed by private equity and other funders, are using American creators’ work to build AI platforms without authorization or regard for copyright law.”"
Copyright Law Set to Govern AI Under Trump’s Executive Order; Bloomberg Law, January 23, 2026
Michael McLaughlin, Bloomberg Law; Copyright Law Set to Govern AI Under Trump’s Executive Order
[Kip Currier: I posted this Bloomberg Law article excerpt to the Canvas site for the graduate students in my Intellectual Property and Open Movements course this term, along with the following note:
Copyright law is the potential giant-slayer vis-a-vis AI tech companies that have used copyrighted works as AI training data, without permission or compensation.
Information professionals who have IP acumen (e.g. copyright law and fair use familiarity) will have vital advantages on the job market and in their organizations.]
[Excerpt]
"The legal landscape for artificial intelligence is entering a period of rapid consolidation. With President Donald Trump’s executive order in December 2025 establishing a national AI framework, the era of conflicting state-level rules may be drawing to a close.
But this doesn’t signal a reduction in AI-related legal risk. It marks the beginning of a different kind of scrutiny—one centered not on regulatory innovation but on the most powerful legal instrument already available to federal courts: copyright law.
The lesson emerging from recent AI litigation, most prominently Bartz v. Anthropic PBC, is that the greatest potential liability to AI developers doesn’t come from what their models generate. It comes from how those models were trained, and from the provenance of the content used in that training.
As the federal government asserts primacy over AI governance, the decisive question will be whether developers can demonstrate that their training corpora were acquired lawfully, licensed appropriately (unless in the public domain), and documented thoroughly."
Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026
John E. Dunn, CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics
"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.
Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), “genuinely helpful” (benefitting its users), as well as being “compliant with Anthropic’s guidelines”.
According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.
Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.
While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.
“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."
It Makes Sense That People See A.I. as God; The New York Times, January 23, 2026
Joseph Bernstein, The New York Times; It Makes Sense That People See A.I. as God
"More and more, when it comes to our relationships with A.I. and the complex algorithms that shape so much of our modern subjectivity, we have slipped into the language and habits of mind we normally reserve for deities. And even people who do not make an explicit connection between A.I. and religion engage a kind of religious mode around the new technology."
Wednesday, January 21, 2026
Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss; The Guardian, January 21, 2026
John Collingridge and Graeme Wearden, The Guardian; Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss
"Jamie Dimon, the boss of JP Morgan, has said artificial intelligence “may go too fast for society” and cause “civil unrest” unless governments and business support displaced workers.
While advances in AI will have huge benefits, from increasing productivity to curing diseases, the technology may need to be phased in to “save society”, he said...
Jensen Huang, the chief executive of the semiconductor maker Nvidia, whose chips are used to power many AI systems, argued that labour shortages rather than mass layoffs were the threat.
Playing down fears of AI-driven job losses, Huang told the meeting in Davos that “energy’s creating jobs, the chips industry is creating jobs, the infrastructure layer is creating jobs … jobs, jobs, jobs”...
Huang also argued that AI robotics was a “once-in-a-generation” opportunity for Europe, as the region had an “incredibly strong” industrial manufacturing base."
They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026
Danielle Abril, The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.
"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.
Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...
The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.
The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."
Tuesday, January 20, 2026
FREE WEBINAR: REGISTER: AI, Intellectual Property and the Emerging Legal Landscape; National Press Foundation, Thursday, January 22, 2026
National Press Foundation; REGISTER: AI, Intellectual Property and the Emerging Legal Landscape
"Artificial intelligence is colliding with U.S. copyright law in ways that could reshape journalism, publishing, software, and the creative economy.
The intersection of AI and intellectual property has become one of the most consequential legal battles of the digital age, with roughly 70 federal lawsuits filed against AI companies and copyright claims on works ranging from literary and visual work to music and sound recording to computer programs. Billions of dollars are at stake.
Courts are now deciding what constitutes “fair use,” whether and how AI companies may use copyrighted material to build models, what licensing is required, and who bears responsibility when AI outputs resemble protected works. The legal decisions will shape how news, art, and knowledge are produced — and who gets paid for them.
To help journalists better understand and report on the developing legal issues of AI and IP, join the National Press Foundation and a panel of experts for a wide-ranging discussion around the stakes, impact and potential solutions. Experts in technology and innovation as well as law and economics join journalists in this free online briefing 12-1 p.m. ET on Thursday, January 22, 2026."
AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up; Australian Broadcasting Corporation, January 18, 2026
Alan Kohler, Australian Broadcasting Corporation; AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up
"As 2025 began, I thought humanity's biggest problem was climate change.
In 2026, AI is more pressing...
Musk's xAI and the other intelligence developers are working as fast as possible towards what they call AGI (artificial general intelligence) or ASI (artificial superintelligence), which is, in effect, AI that makes its own decisions. Given its answer above, an ASI version of Grok might decide not to do non-consensual porn, but others will.
Meanwhile, photographic and video evidence in courts will presumably become useless if they can be easily faked. Many courts are grappling with this already, including the Federal Court of Australia, but it could quickly get out of control.
AI will make politics much more chaotic than it already is, with incredibly effective fake campaigns including damning videos of candidates...
But AI is not like the binary threat of a nuclear holocaust — extinction or not — its impact is incremental and already happening. The Grok body fakes are known about, and the global outrage has apparently led to some controls on it for now, but the impact on jobs and the economy is completely unknown and has barely begun."
Monday, January 19, 2026
AI companies will fail. We can salvage something from the wreckage; The Guardian, January 18, 2026
Cory Doctorow, The Guardian; AI companies will fail. We can salvage something from the wreckage
"The growth narrative of AI is that AI will disrupt labor markets. I use “disrupt” here in its most disreputable tech-bro sense.
The promise of AI – the promise AI companies make to investors – is that there will be AI that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself and give the other half to the AI company.
That is the $13tn growth story that Morgan Stanley is telling. It’s why big investors are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.
Now, if AI could do your job, this would still be a problem. We would have to figure out what to do with all these unemployed people.
But AI can’t do your job. It can help you do your job, but that does not mean it is going to save anyone money...
After more than 20 years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans. That is why the “monkey selfie” is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.
And not only has the Copyright Office taken this position, they have defended it vigorously in court, repeatedly winning judgments to uphold this principle.
The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission...
AI is a bubble and bubbles are terrible.
Bubbles transfer the life savings of normal people who are just trying to have a dignified retirement to the wealthiest and most unethical people in our society, and every bubble eventually bursts, taking their savings with it."
Sunday, January 18, 2026
Publishers seek to join lawsuit against Google over AI training; Reuters, January 15, 2026
Blake Brittain, Reuters; Publishers seek to join lawsuit against Google over AI training
"Publishers Hachette Book Group and Cengage Group asked a California federal court on Thursday for permission to intervene in a proposed class action lawsuit against Google over the alleged misuse of copyrighted material used to train its artificial intelligence systems.
The publishers said in their proposed complaint that the tech company "engaged in one of the most prolific infringements of copyrighted materials in history" to build its AI capabilities, copying content from Hachette books and Cengage textbooks without permission...
The lawsuit currently involves groups of visual artists and authors who sued Google for allegedly misusing their work to train its generative AI systems. The case is one of many high-stakes lawsuits brought by artists, authors, music labels and other copyright owners against tech companies over their AI training."