Saturday, September 13, 2025

World Meeting on Human Fraternity: Disarming words to disarm the world; Vatican News, September 13, 2025

Roberto Paglialonga, Vatican News; World Meeting on Human Fraternity: Disarming words to disarm the world


[Kip Currier: There is great wisdom and guidance in these words from Pope Leo and Fr. Enzo Fortunato (highlighted from this Vatican News article for emphasis):

“Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”

What is in the best interests of corporations and shareholders should not -- must not -- ever be this planet's central organizing principle.

To the contrary, that which is at the very center of our humanity -- truth, freedom, the well-being and dignity of each and every person, and prioritization of the best interests of all members of humanity -- MUST be our North Star and guiding light.]


[Excerpt]

"Representatives from the world of communication and information—directors and CEOs of international media networks— gathered in Rome for the “News G20” roundtable, coordinated by Father Enzo Fortunato, director of the magazine Piazza San Pietro. The event took place on Friday 12 September in the Sala della Protomoteca on Rome's Capitoline Hill. The participants addressed a multitude of themes, including transparency and freedom of information in times of war and conflict: the truth of facts as an essential element to “disarm words and disarm the world,” as Pope Leo XIV has said, so that storytelling and narrative may once again serve peace, dialogue, and fraternity. They also discussed the responsibility of those who work in media to promote the value of competence, in-depth reporting, and credibility in an age dominated by unchecked social media, algorithms, clickbait slogans, and rampant expressions of hatred and violence from online haters.

Three pillars of our time: truth, freedom, dignity


In opening the workshop, Father Fortunato outlined three “pillars” that can no longer be taken for granted in our time: truth, freedom, and dignity. Truth, he said, is “too often manipulated and exploited,” and freedom is “wounded,” as in many countries around the world “journalists are silenced, persecuted, or killed.” Yet “freedom of the press should be a guarantee for citizens and a safeguard for democracy.” Today, Fr. Fortunato continued, “we have many ‘dignitaries’ but little dignity”: people are targeted by “hate and defamation campaigns, often deliberately orchestrated behind a computer screen. Words can wound more than weapons—and not infrequently, those wounds lead to extreme acts.” Precisely in a historical period marked by division and conflict, humanity—despite its diverse peoples, cultures, and opinions—is called to rediscover what unites it. “Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”"

A.I.’s Prophet of Doom Wants to Shut It All Down; The New York Times, September 12, 2025

The New York Times; A.I.’s Prophet of Doom Wants to Shut It All Down

"The first time I met Eliezer Yudkowsky, he said there was a 99.5 percent chance that A.I. was going to kill me.

I didn’t take it personally. Mr. Yudkowsky, 46, is the founder of the Machine Intelligence Research Institute, a Berkeley-based nonprofit that studies risks from advanced artificial intelligence.

For the last two decades, he has been Silicon Valley’s version of a doomsday preacher — telling anyone who will listen that building powerful A.I. systems is a terrible idea, one that will end in disaster.

That is also the message of Mr. Yudkowsky’s new book, “If Anyone Builds It, Everyone Dies.” The book, co-written with MIRI’s president, Nate Soares, is a distilled, mass-market version of the case they have been making to A.I. insiders for years.

Their goal is to stop the development of A.I. — and the stakes, they say, are existential...

And what about the good things that A.I. can do? Wouldn’t shutting down A.I. development also mean delaying cures for diseases, A.I. tutors for students and other benefits?

“We totally acknowledge the good effects,” he replied. “Yep, these things could be great tutors. Yep, these things sure could be useful in drug discovery. Is that worth exterminating all life on Earth? No.”"

Is ‘The Pitt’ Really an ‘ER’ Spinoff? Michael Crichton’s Estate Says It Is.; The New York Times, September 12, 2025

The New York Times; Is ‘The Pitt’ Really an ‘ER’ Spinoff? Michael Crichton’s Estate Says It Is.

The estate of the best-selling author, which has intellectual property rights to “ER,” and the creators of the new hit TV show are waging a legal battle over whether it’s a stealth reboot.

"On Nov. 4, the defendants asked the court to dismiss the complaint, citing their constitutional free speech rights and arguing that “The Pitt” is not a derivative work of “ER.” They emphasized that the new series is about the post-Covid world, reminding the judge that it deals with events that arose after Mr. Crichton’s death...

Rewatching “ER” through the lens of the lawsuit — as a jury might be asked to — can be a strange experience. Is the struggle over whether to put a dying elderly patient onto a ventilator a distinct plot point (this would support Team Crichton) or an everyday tragedy in a large urban hospital (Team Pitt)?

“‘The Pitt’ has no connection to ‘ER’ — it does not use ‘ER’’s intellectual property, characters, plot, setting or narrative pacing,” the defendants said. “While both series are medical dramas set in a hospital, this concept is hardly unique.” They name-checked “Grey’s Anatomy,” “Chicago Med,” “House” and “The Good Doctor.”...

The case “could serve as a model for how to figure out how much things are worth in the streaming space,” said Jennifer Porst, a professor of media industry studies at Emory University in Atlanta."


Perplexity's definition of copyright gets it sued by the dictionary; Engadget, September 11, 2025

 Anna Washenko, Engadget; Perplexity's definition of copyright gets it sued by the dictionary

"Merriam-Webster and its parent company Encyclopedia Britannica are the latest to take on AI in court. The plaintiffs have sued Perplexity, claiming that AI company's "answer engine" product unlawfully copies their copyrighted materials. They are also alleging copyright infringement for instances where Perplexity's AI creates false or inaccurate hallucinations that it then wrongly attributes to Britannica or Merriam-Webster. The complaint, filed in New York federal court, is seeking unspecified monetary damages and an order that blocks Perplexity from misusing their content."

Friday, September 12, 2025

GPT-5’s Ethics Guidelines for Using It in Philosophical Research; Daily Nous, September 10, 2025

   

Daily Nous; GPT-5’s Ethics Guidelines for Using It in Philosophical Research

"In a post last month, we discussed the question, “How much use of AI in our research is acceptable?...

What do you think of ChatGPT-5’s three positions regarding ethical AI use in research? Are they missing anything? Are they too demanding? Are they any good?"

Thursday, September 11, 2025

Books by Bots: Librarians grapple with AI-generated material in collections; American Libraries, September 2, 2025

Reema Saleh, American Libraries; Books by Bots: Librarians grapple with AI-generated material in collections

"How to Spot AI-Generated Books

Once an AI-generated book has made it to your library, it will likely give itself away with telltale signs such as jumbled, repetitive, or contradicting sentences; glaring grammatical errors or false statements; or digital art that looks too smooth around the corners.

Of course, if you can get a digital sneak-peek inside a book before ordering, all the better. But if not, how can you head off AI content so it never arrives on your desk? The following tips can help.

  • Look into who the author is and how “real” they seem, says Robin Bradford, a collection development librarian at a public library in Washington. An author with no digital footprint is a red flag, especially if they are credited with a slew of titles each year. Also a red flag: a book with no author listed at all.
  • Exercise caution regarding self-published books, small presses, or platforms such as Amazon, which filters out less AI-generated content than other vendors do.
  • Think about whether the book is capitalizing on the chance that a reader will confuse it with another, more popular book, says Jane Stimpson, a library instruction and educational technology consultant for the Massachusetts Library System. Does it have a cover similar to that of an existing bestseller? Just as animated Disney movies get imitated by low-budget knockoffs, popular titles get imitated by AI-generated books.
  • Check if there is mention of AI use in the Library of Congress record associated with the book, says Sarah Manning, a collection development librarian at Boise (Idaho) Public Library (BPL). If the book has been registered with the US Copyright Office, its record may mention AI."

Wednesday, September 10, 2025

An Essay Contest Winner Used A.I. Should She Return the $1,000 Award?; The Ethicist, The New York Times; September 10, 2025

The Ethicist, The New York Times; An Essay Contest Winner Used A.I. Should She Return the $1,000 Award?

[Kip Currier: This is a thought-provoking and timely ethical dilemma, especially with the proliferation of AI into more and more aspects of our personal and professional lives.

The question posed to The Ethicist in this edition of his column concerns students submitting essays for a contest whose winners receive a monetary scholarship. The questioner suspects that some students used AI to write their essays and wonders whether the winners should be confronted. The beauty of this question is that we don't know for sure whether AI was or was not used; it's entirely speculative. What would you do?

Does your thinking change as to whether using AI to write something is ethical or unethical if:

  • AI is used by a university professor to prepare a lecture
  • AI is used by a university professor to create an essay exam
  • AI is used by an elementary school teacher to prepare a lesson
  • AI is used by an elementary school teacher to create a multiple choice test
  • AI is used by your lawyer to write the legal brief for your lawsuit
  • AI is used by your lawyer's paralegal to write the legal brief for your lawsuit
  • AI is used to synthesize the court's verdict by the judge deciding your case
  • AI is used by a library director to compose the library's strategic plan
  • AI is used by a non-profit university to compose the university's strategic plan
  • AI is used by a for-profit company to compose the company's strategic plan
  • AI is used by a military branch to compose a strategy for military engagement
  • AI is used by a government agency to compose a strategy for national security
  • AI is used by local law enforcement to compose a strategy for public safety
  • AI is used by a summer camp to compose a strategy for camp safety
  • AI is used by your doctor to devise the treatment plan for your relative's cancer treatment
  • AI is used by a scientist to devise treatments for helping patients with cancer
  • AI is used to write a song for your significant other's birthday
  • AI is used to write a song for a musical you are creating
  • AI is used to write a song for a pharmaceutical company ad on TV
  • AI is used by your clergy head to write an annual report
  • AI is used by your clergy head to write a sermon
  • AI is used by your clergy head to write the eulogy for the funeral of one of your parents


Questions: Are you able to identify any variations in your ethical reasoning and how you decide your positions in the scenarios above?

What are you basing your decisions on? 

Are some scenarios easier or harder for you than others? If so, why?

In which situations, if any, do you think it is okay or not okay to use AI?

What additional information, if any, would you like to know that might help you to make decisions about whether and when the uses of AI are ethical or unethical?]


[Excerpt]

"I volunteer with our local historical society, which awards a $1,000 scholarship each year to two high school students who submit essays about a meaningful experience with a historical site. This year, our committee noticed a huge improvement in the quality of the students’ essays, and only after announcing the winners did we realize that one of them, along with other students, had almost certainly used artificial intelligence. What to do? I think our teacher liaison should be told, because A.I. is such a challenge for schools. I also feel that this winner should be confronted. If we are right, that might lead her to confess her dishonesty and return the award. — Name Withheld"

Appeals court, weighing Trump’s Library of Congress takeover, reinstates copyright chief; Politico, September 9, 2025

Politico; Appeals court, weighing Trump’s Library of Congress takeover, reinstates copyright chief

"A federal appeals court ruled the nation’s top copyright official can continue serving in her post following President Donald Trump’s attempt to fire her.

A divided three-judge panel of the D.C. Circuit Court of Appeals ruled Wednesday that Shira Perlmutter is entitled to continue to serve as the register of copyrights at the Library of Congress, despite the White House’s claim that Trump fired her from the post in May.

While the Supreme Court and the D.C. Circuit have permitted Trump to fire a range of executive branch officials who claimed they were protected from dismissal, judges Florence Pan and J. Michelle Childs concluded that Perlmutter’s case was stronger because she doesn’t exercise significant executive power in her job.

“Because Perlmutter leads an agency that is housed in the Legislative Branch and her primary role is to advise Congress, Perlmutter’s situation differs significantly from the Executive Branch officials whose removals have been repeatedly upheld,” Pan wrote, joined by Childs. Both are appointees of former President Joe Biden.

Perlmutter was dismissed days after Trump moved to fire Librarian of Congress Carla Hayden, who has not sought to challenge her ouster in court. The president’s move onto what has traditionally been legislative branch turf has vexed Democrats and some congressional Republicans.

Judge Justin Walker, a Trump appointee, dissented. He said Perlmutter’s claims were too similar to cases the Supreme Court ruled on earlier this year where the justices upheld, for now, Trump’s power to fire members of labor-related boards and the Consumer Product Safety Commission."

Tuesday, September 9, 2025

Judge Delays Preliminary Approval in Anthropic Copyright Settlement; Publishers Weekly, September 9, 2025

 Jim Milliot, Publishers Weekly; Judge Delays Preliminary Approval in Anthropic Copyright Settlement

"Alsup signaled his discomfort with the proposal in a filing released the evening before the September 8 hearing, writing that he was “disappointed” that attorneys representing the author plaintiffs had left “important questions to be answered in the future, including respecting the Works List, Class List, Claim Form." He was especially concerned for works with multiple claimants with regards to the notification process, voicing worry over what would happen if one party wanted to opt-out of the settlement and the other did not...

In a statement, Authors Guild CEO Mary Rasenberger said the Guild was “confused” by the court’s suggestion that the Guild and AAP were working behind the scenes in ways that could pressure authors to accept the settlement “when that is precisely the opposite of our proposed role as informational advisors to the working group.”

The goal of the working group, which had been proposed by lawyers for the class, “is to ensure that authors’ interests are fully represented and to bring our expertise... to the discussions with complete transparency,” Rasenberger continued. “There are industry norms that we want to make sure are accounted for.”...

AAP CEO Maria Pallante offered an even more vigorous explanation of AAP’s role, as well as the role of the Guild, in the proceedings. “The Association of American Publishers and the Authors’ Guild are not-for-profits that have worked hard to support counsel in the case and to make sure that authors and publishers have the information they need,” Pallante said in a statement. “Unfortunately, the Court today demonstrated a lack of understanding of how the publishing industry works.”"

Monday, September 8, 2025

Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference; Episcopal News Service (ENS), September 5, 2025

David Paulsen, Episcopal News Service (ENS); Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference

"Some of the most tech-forward minds in the Protestant church gathered here this week at the Faithful Futures conference, where participants wrestled with the ethical, practical and spiritual implications of artificial intelligence. The Episcopal Church is one of four Protestant denominations that hosted the Sept. 2-5 conference. About halfway through, one of the moderators acknowledged that AI has advanced so far and so rapidly that most conferences on AI are no longer focused just on AI...

AI raises spiritual questions over what it means to be human

Much of the conference seemed to pivot on questions that defied easy answers. In an afternoon session Sept. 3, several church leaders who attended last year’s Faithful Futures conference in Seattle, Washington, were invited to give 10-minute presentations on their preferred topics.

“What happens to theology when the appearance of intelligence is no longer uniquely human?” said the Rev. Michael DeLashmutt, a theology professor at General Theological Seminary in New York, New York, who also serves as the Episcopal seminary’s senior vice president.

DeLashmutt argued that people of faith, in an era of AI, must not forget what it means to be Christian and to be human. “Being human means being relational, embodied, justice-oriented and open to God’s spirit,” he said. “So, I think the real risk is not that machines will become human, but that we will forget the fullness of what humanity actually is.”

Kip Currier, a computing and information professor at the University of Pittsburgh, warned that AI is being used by sports betting platforms to appeal to gamblers, including those suffering from addiction. Mark Douglas, an ethics professor at Columbia Theological Seminary, outlined the ecological impact of AI data centers, which need to consume massive amounts of energy and water.

The Rev. Andy Morgan, a Presbyterian pastor based in Knoxville, Tennessee, described himself as his denomination’s “unofficial AI person” and suggested that preachers should not be afraid of using AI to improve their sermons – as long as they establish boundaries to prevent delegating too much to the technology."

Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up; Intellectual Property & Technology Law Journal, October 2025

Anna B. Naydonov, Mark Davies and Jules Lee, Intellectual Property & Technology Law Journal; Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up

"Probably no intellectual property (IP) topic in the last several years has gotten more attention than the litigation over the use of the claimed copyrighted content in training artificial intelligence (AI) models.The issue of whether fair use applies to save the day for AI developers is rightfully deemed critical, if not existential, for AI innovation. But whether class relief – and the astronomical damages that may come with it – is available in these cases is a question of no less significance."

Saturday, September 6, 2025

Big Questions About AI and the Church Video; August 25, 2025

Big Questions About AI and the Church Video

Kip Currier: This Big Questions About AI and the Church video (1:12:14) was created by the members of my cohort and me (Cohort 7). Our cohort emanated from the groundbreaking August 2024 ecumenical AI & The Church Summit in Seattle that we all attended.

Perhaps raising more questions than providing answers, the video's aim is to encourage reflection and discussion of the many-faceted issues and concerns at the nexus of AI, faith communities, and our broader societies.

Many thanks to our cohort member Rev. Dr. Andy P. Morgan for spearheading, synthesizing, and uploading this video to YouTube. 

Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit; NPR, September 5, 2025

NPR; Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit

"In one of the largest copyright settlements involving generative artificial intelligence, Anthropic AI, a leading company in the generative AI space, has agreed to pay $1.5 billion to settle a copyright infringement lawsuit brought by a group of authors.

If the court approves the settlement, Anthropic will compensate authors around $3,000 for each of the estimated 500,000 books covered by the settlement.

The settlement, which U.S. Senior District Judge William Alsup in San Francisco will consider approving next week, is in a case that involved the first substantive decision on how fair use applies to generative AI systems. It also suggests an inflection point in the ongoing legal fights between the creative industries and the AI companies accused of illegally using artistic works to train the large language models that underpin their widely-used AI systems.

The fair use doctrine enables copyrighted works to be used by third parties without the copyright holder's consent in some circumstances, such as when illustrating a point in a news article. AI companies trying to make the case for the use of copyrighted works to train their generative AI models commonly invoke fair use. But authors and other creative industry plaintiffs have been pushing back.

"This landmark settlement will be the largest publicly reported copyright recovery in history," the settlement motion states, arguing that it will "provide meaningful compensation" to authors and "set a precedent of AI companies paying for their use of pirated websites."

"This settlement marks the beginning of a necessary evolution toward a legitimate, market-based licensing scheme for training data," said Cecilia Ziniti, a tech industry lawyer and former Ninth Circuit clerk who is not involved in this specific case but has been following it closely. "It's not the end of AI, but the start of a more mature, sustainable ecosystem where creators are compensated, much like how the music industry adapted to digital distribution.""

Monday, September 1, 2025

Time; TIME100 AI 2025

 Time; TIME100 AI 2025

Meet the innovators, leaders, and thinkers reshaping our world through groundbreaking advances in artificial intelligence.

Saturday, August 30, 2025

DHS references Mexican IndyCar driver to promote ‘Speedway Slammer’ detention center; The Guardian, August 7, 2025

Agencies, The Guardian; DHS references Mexican IndyCar driver to promote ‘Speedway Slammer’ detention center


[Kip Currier: Not only is this statement by a DHS spokesperson -- “An AI generated image of a car with ‘ICE’ on the side does not violate anyone’s intellectual property rights” -- factually inaccurate, as there is a cogent argument that these actions by DHS may negatively impact trademark rights (and rights of publicity); it is also morally offensive to recklessly or intentionally appropriate, without permission, the racing number of one of the top Mexican drivers for use in a DHS promotion that demeans human beings.]


[Excerpt]

"IndyCar driver Pato O’Ward and series officials were shocked by a social media post from the Department of Homeland Security that touts plans for an immigration detention center in Indiana dubbed “Speedway Slammer.” It includes a car with the same number as that of O’Ward, the only Mexican driver in the series.

“It caught a lot of people off guard. Definitely caught me off guard,” O’Ward said Wednesday. “I was just a little bit shocked at the coincidences of that and, you know, of what it means ... I don’t think it made a lot of people proud, to say the least.”

The post on Tuesday included an AI-generated image of an IndyCar-style vehicle with O’Ward’s No 5 that has “ICE” stamped on it. In the image, the car is in front of a jail...

“We were unaware of plans to incorporate our imagery as part of yesterday’s announcement,” IndyCar said in a statement Wednesday. “Consistent with our approach to public policy and political issues, we are communicating our preference that our IP not be utilized moving forward in relation to this matter.”

A DHS spokesperson said it would not change the social media post. “An AI generated image of a car with ‘ICE’ on the side does not violate anyone’s intellectual property rights. Any suggestion to the contrary is absurd,” the spokesperson said in a statement. “DHS will continue promoting the ‘Speedway Slammer’ as a comprehensive and collaborative approach to combatting illegal immigration.”"

Anthropic’s settlement with authors may be the ‘first domino to fall’ in AI copyright battles; Fortune, August 27, 2025

Beatrice Nolan, Fortune; Anthropic’s settlement with authors may be the ‘first domino to fall’ in AI copyright battles

"The amount of the settlement was not immediately disclosed, but legal experts not involved in the case said the figure could easily reach into the hundreds of millions. It’s also still unclear how the settlement will be distributed among various copyright holders, which could include large publishing houses as well as individual authors.

The case was the first certified class action against an AI company over the use of copyrighted materials, and the quick settlement, which came just one month after the judge ruled the case could proceed to trial as a class action, is a win for the authors, according to legal experts."

Friday, August 29, 2025

Medicare Will Require Prior Approval for Certain Procedures; The New York Times, August 28, 2025

Reed Abelson, The New York Times; Medicare Will Require Prior Approval for Certain Procedures


[Kip Currier: Does anyone who receives Medicare -- or cares about someone who does -- really think that letting AI make "prior approvals" for any Medicare procedures is a good thing?

Read the entire article, but just the money quote below should give any thinking person heart palpitations about this AI Medicare pilot project's numerous red flags and conflicts of interest...]


[Excerpt]

"The A.I. companies selected to oversee the program would have a strong financial incentive to deny claims. Medicare plans to pay them a share of the savings generated from rejections."

Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors; Wired, August 26, 2025

Kate Knibbs, Wired; Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors

"ANTHROPIC HAS REACHED a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what could have been a financially devastating outcome in court."

Thursday, August 28, 2025

Anthropic’s surprise settlement adds new wrinkle in AI copyright war; Reuters, August 27, 2025

Reuters; Anthropic’s surprise settlement adds new wrinkle in AI copyright war

"Anthropic's class action settlement with a group of U.S. authors this week was a first, but legal experts said the case's distinct qualities complicate the deal's potential influence on a wave of ongoing copyright lawsuits against other artificial-intelligence focused companies like OpenAI, Microsoft and Meta Platforms.

Amazon-backed Anthropic was under particular pressure, with a trial looming in December after a judge found it liable for pirating millions of copyrighted books. The terms of the settlement, which require a judge's approval, are not yet public. And U.S. courts have just begun to wrestle with novel copyright questions related to generative AI, which could prompt other defendants to hold out for favorable rulings."

Think you actually own all those movies you’ve been buying digitally? Think again; The Guardian, August 27, 2025

The Guardian; Think you actually own all those movies you’ve been buying digitally? Think again


[Kip Currier: This article underscores why the First Sale Doctrine (Section 109(a) of the U.S. Copyright Act) is such a boon for consumers and public libraries: when you (or a library) buy a physical book, you actually do own that physical book (though the copyright to that book remains with the copyright holder, which is an important distinction to remember).

The First Sale Doctrine is what enables a library to purchase physical books and then lend them to as many borrowers as it wants. Not so for digital books, which publishers generally license to users and libraries for a fee rather than sell outright.

The bottom line: as a digital content licensee, you retain access to the digital items you license only for as long as the licensor says you may have access to its licensed content.

This distinction between physical and digital content has put great pressure on library budgets to provide users with access to electronic resources, while libraries face ever-increasing fees from licensors. This fiscally fraught environment has been exacerbated by Trump 2.0's dismantling of IMLS (Institute of Museum and Library Services) grants that supported the licensing of ebooks and audiobooks by libraries. Some states have said "enough" and are attempting to rebalance what some see as an unequal power dynamic between publishers and libraries/users. See "Libraries Pay More for E-Books. Some States Want to Change That. Proposed legislation would pressure publishers to adjust borrowing limits and find other ways to widen access." New York Times (July 16, 2025)]


[Excerpt]

"Regardless of whether the lawsuit is ultimately successful, it speaks to a real problem in an age when people access films, television series, music and video games through fickle online platforms: impermanence. The advent of streaming promised a world of digital riches in which we could access libraries of our favorite content whenever we wanted. It hasn’t exactly worked out that way...

The problem is that you aren’t downloading the movie, to own and watch forever; you’re just getting access to it on Amazon’s servers – a right that only lasts as long as Amazon also has access to the film, which depends on capricious licensing agreements that vary from title to title. A month or five years from now, that license may expire – and the movie will disappear from your Amazon library. Yet the $14.99 you paid does not reappear in your pocket."

Wednesday, August 27, 2025

Trump is targeting several Smithsonian artworks. Here they are.; The Washington Post, August 26, 2025

 

The Washington Post; Trump is targeting several Smithsonian artworks. Here they are.


[Kip Currier: Donald Trump and his administration's efforts to remove, revise, and erase artistic and historical content are the opposite of free speech and intellectual freedom. Art should challenge us to think and feel in new ways. We as individuals are certainly free to like a piece of art, hate it, or everything in between on the spectrum of how we feel about it. But the federal (or state) government should not be controlling access to art and suppressing or falsely presenting history in a free democracy. That's what authoritarians and dictators do in non-democratic nations like Russia, China, and North Korea.

If you don't like a particular painting, book, or movie, you can simply walk away from that painting, not read that book, or not watch that movie. But it isn't your right to stop everyone from seeing art, reading books, and watching films. To paraphrase the late Robert Croneberger, Director of the venerable Carnegie Library of Pittsburgh and a prolific proponent of intellectual freedom, a library isn't doing its job if it doesn't have at least one item that offends each person.

Similarly, museums, like libraries in healthy democracies, are not meant to reflect a compulsory unitary state viewpoint. We're not the Star Trek Borg Collective where everyone must think alike and individuality is verboten. The mantra of the Borg is that Resistance is Futile. Fortunately, we know that resistance is not futile: we can continue to resist efforts to sanitize art, literature, culture, and history. Exercise your right to consume what you want and disregard what you don't want. But don't tell everyone what they can and can't choose to view and read. That's undemocratic and un-American.]


[Excerpt]

"When the White House posted an article condemning a long list of Smithsonian content last week, it pointed to several specific artworks, a sampling that underlined the kind of material that could be targeted by a president who is increasingly interested in influencing what Americans see in public museums.

The list also criticized Smithsonian exhibition texts, learning materials, past performances and the institution for previously flying the intersex-inclusive Pride flag. This month, President Donald Trump said White House officials were conducting a review of the Smithsonian Institution — months after he signed an executive order seeking to root out “anti-American ideology” in the museum and research complex, an effort that experts say would amount to censorship.

The pieces are an eclectic bunch, united mainly by the Trump administration’s public criticism of them. Not all the artworks are currently on view at the museums. Taken together, they tell a story of a White House that is sensitive to imagery that appears to contradict its messaging, whether it shows a transgender woman cast as the Statue of Liberty or a boy peering over the Southern border...

Here is a look at the artworks named by the White House as evidence that Trump is “right” about the Smithsonian — and how several of the artists have responded."

Tuesday, August 26, 2025

Can you copyright artwork made using AI?; NPR, August 25, 2025

Kate Concannon, NPR; Can you copyright artwork made using AI?

"Copyright is the legal system used to reward and protect creations made by humans. But with growing adoption of artificial intelligence, does copyright extend to artwork that’s made using AI? Today on the show, how a test case over a Vincent Van Gogh mashup is testing the boundaries of copyright law."

Monday, August 25, 2025

New Ruling Makes Old Postings a New Copyright Problem; Lexology, August 21, 2025

Gordon Feinblatt LLC, Lexology; New Ruling Makes Old Postings a New Copyright Problem

 "Go through your website and delete any old photos and music you do not own or have a license to use. Every company and internet user should be diligent and not post any images, music, or other content unless they are certain they have the rights to do so. Lesson number one is: do not merely pluck material from the internet and use it for yourself.

Lesson number two is that copyright owners can now look back more than three years and obtain damages for a long history of infringement. Last year the Supreme Court ruled that copyright damages can reach back to the beginning of the infringing usage so long as a copyright infringement case is properly brought. This means that historical usage of infringing works can rack up many years of damages."

Medical triage as an AI ethics benchmark; Nature, August 22, 2025

 

Nature; Medical triage as an AI ethics benchmark

"We present the TRIAGE benchmark, a novel machine ethics benchmark designed to evaluate the ethical decision-making abilities of large language models (LLMs) in mass casualty scenarios. TRIAGE uses medical dilemmas created by healthcare professionals to evaluate the ethical decision-making of AI systems in real-world, high-stakes scenarios. We evaluated six major LLMs on TRIAGE, examining how different ethical and adversarial prompts influence model behavior. Our results show that most models consistently outperformed random guessing, with open source models making more serious ethical errors than proprietary models. Providing guiding ethical principles to LLMs degraded performance on TRIAGE, which stand in contrast to results from other machine ethics benchmarks where explicating ethical principles improved results. Adversarial prompts significantly decreased accuracy. By demonstrating the influence of context and ethical framing on the performance of LLMs, we provide critical insights into the current capabilities and limitations of AI in high-stakes ethical decision making in medicine."

How ChatGPT Surprised Me; The New York Times, August 24, 2025

The New York Times; How ChatGPT Surprised Me

"In some corners of the internet — I’m looking at you, Bluesky — it’s become gauche to react to A.I. with anything save dismissiveness or anger. The anger I understand, and parts of it I share. I am not comfortable with these companies becoming astonishingly rich off the entire available body of human knowledge. Yes, we all build on what came before us. No company founded today is free of debt to the inventors and innovators who preceded it. But there is something different about inhaling the existing corpus of human knowledge, algorithmically transforming it into predictive text generation and selling it back to us. (I should note that The New York Times is suing OpenAI and its partner Microsoft for copyright infringement, claims both companies have denied.)

Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built...

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the fewest number of users it will have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me."

Who owns the copyright for AI work?; Financial Times, August 24, 2025

Financial Times; Who owns the copyright for AI work?

"Generative artificial intelligence poses two copyright puzzles. The first is the widely discussed question of compensation for work used to train AI models. The second, which has yet to receive as much attention, concerns the work that AI produces. Copyright is granted to authors. So what happens to work that has no human author?"

Sunday, August 24, 2025

Suetopia: Generative AI is a lawsuit waiting to happen to your business; The Register, August 12, 2025

Adam Pitch, The Register; Suetopia: Generative AI is a lawsuit waiting to happen to your business

"More and more US companies are using generative AI as a way to save money they might otherwise pay creative professionals. But they're not thinking about the legal bills.

You could be asking an AI to create public-facing communications for your company, such as a logo, promotional copy, or an entire website. If those materials happen to look like copyrighted works, you may be hearing from a lawyer.

"It's pretty clear that if you create something that's substantially similar to a copyrighted work that an infringement has occurred, unless it's for a fair use purpose," said Kit Walsh, the Electronic Frontier Foundation's Director of AI and Access-to-Knowledge Legal Projects."

Using AI for Work Could Land You on the Receiving End of a Nasty Lawsuit; Futurism, August 23, 2025

Joe Wilkins, Futurism; Using AI for Work Could Land You on the Receiving End of a Nasty Lawsuit

"For all its hype, artificial intelligence isn't without its psychologicalenvironmental, and even spiritual hazards.

Perhaps the most pressing concern on an individual level, though, is that it puts users on the hook for a nearly infinite number of legal hazards — even at work, as it turns out.


A recent breakdown by The Register highlights the legal dangers of AI use, especially in corporate settings. If you use generative AI software to spit out graphics, press releases, logos, or videos, you and your employer could end up facing six-figure damages, the publication warns.


This is thanks to the vast archive of copyrighted data that virtually all commercial generative AI models are trained on.


The Register uses Nintendo's Mario as a prime example of how one might stumble, intentionally or not, into a massive copyright lawsuit, regardless of intent to infringe: if you use AI to generate a cutesy mascot for your plumbing company that looks too much like the iconic videogame character, you could easily find yourself in the legal crosshairs of the notoriously litigious corporation.


"The real harm comes from the attorney's fees that you can get saddled with," intellectual property lawyer Benjamin Bedrava told the publication. "Because you could have a hundred and fifty thousand dollars in attorney's fees over something where the license would have been fifteen hundred dollars.""