Showing posts with label ethics. Show all posts

Thursday, March 12, 2026

Autonomous AI Agents Have an Ethics Problem; Undark, March 5, 2026

Undark; Autonomous AI Agents Have an Ethics Problem

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

"As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins of human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards."

An Artist Renounced His Family. They Sued to Acquire His Life’s Work.; The New York Times, March 11, 2026

Arthur Lubow, The New York Times; An Artist Renounced His Family. They Sued to Acquire His Life’s Work.

A settlement is reached in the case of Mike Disfarmer, who renounced his family. Decades later they sued to take back his life’s work. When heirs battle the people who built their legacies, the art may be at stake.

"Art scholars and experts on intellectual property law say the litigation over the Disfarmer archive poses consequential ethical and legal questions, among them: Who should manage the estate of an artist who dies without a will? Heirs who hardly knew him — or outsiders, including museums, who built and conserved the estates that are now worth fighting over?

The Disfarmer litigation raises some of the same issues — and indeed, involves some of the same players — as the lawsuits initiated by families of two other reclusive American artists who died without wills: Vivian Maier and Henry Darger, who both lived in Chicago. All three were unrecognized during their lifetimes and out of touch with their relatives. When their estates belatedly became valuable, distant cousins stepped up to demand their rights. The law would dictate the outcome. But some question whether the law always serves an artist’s best interests."

Tuesday, March 3, 2026

Fans value ethics over innovation at AI hologram concerts, new study finds; Phys.org, March 3, 2026

Phys.org; Fans value ethics over innovation at AI hologram concerts, new study finds

"The recent success of the ABBA Voyage virtual reunion tour and the Tupac hologram at Coachella show how audiences embrace these performances as opportunities to relive shared cultural milestones.

However, little is known about how consumers perceive the uniqueness, nostalgia and ethicality of holographic AI concerts, and how these perceptions translate into emotional and social values.

"Ethics is not optional—it's definitely strategic," said researcher Seden Dogan, assistant professor of instruction in the USF School of Hospitality and Sport Management. "When using technologies like holograms or AI to recreate past artists, ethical responsibility matters more than novelty alone."

Dogan is the lead author of the paper, "Reviving legends through holographic AI event experiences: Consumer acceptance and value insights," recently published in the International Journal of Contemporary Hospitality Management.

"Audiences care more about whether the holographic performance felt respectful and morally appropriate than about how innovative or memory-evoking it was," Dogan said."

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer, International Relations, Deakin University, The Conversation; The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’

"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

Monday, March 2, 2026

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Wednesday, February 18, 2026

Trump Wants His Name Trademarked For Airports—Raising Questions About Profiteering; Forbes, February 18, 2026

Suzanne Rowan Kelleher, Forbes; Trump Wants His Name Trademarked For Airports—Raising Questions About Profiteering

"President Trump’s private company has filed for trademarks for airports using his name—setting up the possibility he could profit from what has historically been an honor in name only—just as plans take flight for an airport near his Florida home to be renamed after him."

Dinner Is Being Recorded, Whether You Know It or Not; The New York Times, February 16, 2026

The New York Times; Dinner Is Being Recorded, Whether You Know It or Not

"To be in public is to risk being filmed. And these days, there’s a good chance it’s happening surreptitiously with smart glasses. Their wearers are filming in restaurants, cafes and bars, capturing warped, eye-level video of drive-through pranks, Michelin-starred meals and work shifts at Texas Roadhouse. Servers, owners and customers can end up as captive participants...

Filming in public spaces is broadly protected by the First Amendment. Some states, including California and Pennsylvania, have two-party consent laws that prohibit recording without express permission, but enforcing them hinges on whether someone has a “reasonable expectation of privacy” in a given setting, said Aaron Krowne, a New York City lawyer specializing in privacy and civil liberties. Restaurants fall in a legal gray area: They are privately owned, but open to anyone who walks in...

The responsibility of using these devices ethically falls largely on the wearer."

Tuesday, February 17, 2026

The economics of AI outweigh ethics for tech CEOs, business leader says; CNN, February 16, 2026

 CNN; The economics of AI outweigh ethics for tech CEOs, business leader says

"Podcast host and business leader Scott Galloway joins Dana Bash on "Inside Politics" to discuss the need for comprehensive government regulation of AI. “We have increasingly outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech," Galloway tells Bash, adding, "This is another example of how government is failing to step in and provide thoughtful, sensible regulations.” His comments come as the Pentagon confirms it's reviewing a contract with AI company Anthropic after a reported clash over the scope of AI guardrails."

Tuesday, January 27, 2026

A Lecture on Faith, Ethics and Artificial Intelligence; Episcopal News Service, Lecture: Saturday, March 7, 11 AM EST

Episcopal News Service; A Lecture on Faith, Ethics and Artificial Intelligence

"Join Grace Church Brooklyn Heights as we welcome Dr. Isaac B. Sharp for a lecture on faith, ethics and artificial intelligence addressing the question: What does Christian Ethics have to say about the promises and pitfalls of artificial intelligence, engaging questions of justice, agency and moral responsibility? The lecture will take place on Saturday, March 7th at Grace Church (254 Hicks Street, Brooklyn, NY 11201) at 11am. A light lunch will be provided. For more information, please email The Rev. Leandra Lisa Lambert at LLambert@gracebrooklyn.org."

Tuesday, December 23, 2025

Lawmaker Sues to Remove Trump’s Name From the Kennedy Center; The New York Times, December 22, 2025

The New York Times; Lawmaker Sues to Remove Trump’s Name From the Kennedy Center

"Representative Joyce Beatty, Democrat of Ohio, sued President Trump on Monday, seeking to force the removal of his name from the John F. Kennedy Center for the Performing Arts.

Ms. Beatty’s lawsuit names as defendants Mr. Trump and the loyalists he appointed to the center’s board. The suit contends that the board’s vote to change the name last week was illegal because an act of Congress is required to rename the building.

Ms. Beatty is represented by Norman Eisen, a White House ethics counsel in the Obama administration, along with Nathaniel Zelinsky, his co-counsel of the Washington Litigation Group.

Mr. Eisen said the name change “violates the Constitution and the rule of law because Congress said this is the name. He doesn’t have a right to change the name.”"

The Benign Zombies of Pluribus; The Hastings Center for Bioethics, December 22, 2025

 Jonathan D. Moreno, The Hastings Center for Bioethics; The Benign Zombies of Pluribus

"Whatever disagreements neuroethicists have, they all presuppose the annoying multiplicity of brains that somehow generate minds. Not so in Vince Gilligan’s new streaming series Pluribus.

A coded message from deep space is the trigger for turning (nearly) all human beings into segments of a “hive mind,” a global super colony that the sociobiologist E.O. Wilson would have recognized from his work on ants. And wouldn’t you know it, those nerdy scientists hanging on every radio impulse from the universe in search of intelligent life provide the gateway to a radical loss of individuality. Thus the SETI geeks enter a long tradition of fictional scientists who unleash forces that quickly run out of control.

The results are mixed: No war, no violence, no racism or sexism.  Also, no personal uniqueness. Is it worth it?...

Like all such speculations Pluribus raises countless questions. How did this actually happen and why, if one member of the hive gets drunk, the whole hive doesn’t?"

Monday, December 22, 2025

Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’; Fortune, December 20, 2025

Fortune; Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’

"Asteria partnered with Moonvalley AI, which makes AI tools for filmmakers, to create Marey, named after cinematographer Étienne-Jules Marey. The tool helps generate AI video that can be used for movies and TV, but only draws on open-license content or material it has explicit permission to use. 

Being careful about the inputs for Asteria’s AI video generation is important, Lyonne said at the Fortune Brainstorm AI conference in San Francisco last week. As AI use increases, both tech and Hollywood need to respect the work of the cast, as well as the crew and the writers behind the scenes. 

“I don’t think it’s super kosher copacetic to just kind of rob freely under the auspices of acceleration or China,” she said. 

While she hasn’t yet used AI to help make a TV show or movie, Lyonne said Asteria has used it in other small ways to develop renderings and other details.

“It’s a pretty revolutionary act that we actually do have that model and that’s you know the basis for everything that we work on,” said Lyonne.

Marey is available to the public for a credits-based subscription starting at $14.99 per month."

Thursday, December 11, 2025

AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures; International Business Times, December 11, 2025

Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures

"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.

This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.

The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.

To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.

We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.

And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.

At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."

Tuesday, December 2, 2025

The case of the fake references in an ethics journal; Retraction Watch, December 2, 2025

Retraction Watch; The case of the fake references in an ethics journal

"Many would-be whistleblowers write to us about papers with nonexistent references, possibly hallucinated by artificial intelligence. One reader recently alerted us to fake references in … an ethics journal. In an article about whistleblowing.

The paper, published in April in the Journal of Academic Ethics, explored “the whistleblowing experiences of individuals with disabilities in Ethiopian public educational institutions.” 

Erja Moore, an independent researcher based in Finland, came across the article while looking into a whistleblowing case in that country. “I started reading this article and found some interesting references that I decided to read as well,” Moore told Retraction Watch. “To my surprise, those articles didn’t exist.”...

The Journal of Academic Ethics is published by Springer Nature. Eleven of the fabricated references cite papers in the Journal of Business Ethics — another Springer Nature title.

“On one hand this is hilarious that an ethics journal publishes this, but on the other hand it seems that this is a much bigger problem in publishing and we can’t really trust scientific articles any more,” Moore said."

Wednesday, November 26, 2025

AI, ethics, and the lawyer's duty after Noland v. Land of the Free; Daily Journal, November 24, 2025

 Reza Torkzadeh, Daily Journal; AI, ethics, and the lawyer's duty after Noland v. Land of the Free

"Noland establishes a bright line for California lawyers. AI may assist with drafting or research, but it does not replace judgment, verification or ethical responsibility. Technology may change how legal work is produced -- it does not change who is accountable for it."

Tuesday, November 18, 2025

Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too; The Conversation, November 17, 2025

Professor of Sociology, College of the Holy Cross, The Conversation; Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too

"Debates about generative artificial intelligence on college campuses have largely centered on student cheating. But focusing on cheating overlooks a larger set of ethical concerns that higher education institutions face, from the use of copyrighted material in large language models to student privacy.

As a sociologist who teaches about AI and studies the impact of this technology on work, I am well acquainted with research on the rise of AI and its social consequences. And when one looks at ethical questions from multiple perspectives – those of students, higher education institutions and technology companies – it is clear that the burden of responsible AI use should not fall entirely on students’ shoulders.

I argue that responsibility, more generally, begins with the companies behind this technology and needs to be shouldered by higher education institutions themselves."

The Unraveling of the Justice Department: Sixty attorneys describe a year of chaos and suspicion.; The New York Times Magazine, November 16, 2025

Emily Bazelon and Rachel Poser, Photographs by Stephen Voss, The New York Times Magazine; The Unraveling of the Justice Department: Sixty attorneys describe a year of chaos and suspicion.

"President Trump’s second term has brought a period of turmoil and controversy unlike any in the history of the Justice Department. Trump and his appointees have blasted through the walls designed to protect the nation’s most powerful law enforcement agency from political influence; they have directed the course of criminal investigations, openly flouted ethics rules and caused a breakdown of institutional culture. To date, more than 200 career attorneys have been fired, and thousands more have resigned. (The Justice Department says many of them have been replaced.)

What was it like inside this institution as Trump’s officials took control? It’s not an easy question to answer. Justice Department norms dictate that career attorneys, who are generally nonpartisan public servants, rarely speak to the press. And the Trump administration’s attempts to crack down on leaks have made all federal employees fearful of sharing information.

But the exodus of lawyers has created an opportunity to understand what’s happening within the agency. We interviewed more than 60 attorneys who recently resigned or were fired from the Justice Department. Much of what they told us is reported here for the first time..."

"“They didn’t want the ethics office calling them up and telling them what to do.” Joseph Tirrell, former director of the Departmental Ethics Office"

Friday, November 14, 2025

Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession; Cleveland.com, November 12, 2025

Cleveland.com; Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession

"A Cleveland defense attorney is under scrutiny in two counties after submitting court filings containing fabrications generated by artificial intelligence — a case that’s prompting broader questions about how lawyers are ethically navigating the use of AI tools in legal practice.

William Norman admitted that a paralegal in his office used ChatGPT to draft a motion to reopen a murder conviction appeal. The document included quotes that did not exist in the trial transcript and misrepresented statements made by the prosecutor."