Showing posts with label agentic AI.

Thursday, March 12, 2026

Autonomous AI Agents Have an Ethics Problem; Undark, March 5, 2026


AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

"As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins of human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards."

Wednesday, November 26, 2025

What Is Agentic A.I., and Would You Trust It to Book a Flight?; The New York Times, November 25, 2025


"A bot may soon be booking your vacation.

Millions of travelers already use artificial intelligence to compare options for flights, hotels, rental cars and more. About 30 percent of U.S. travelers say they’re comfortable using A.I. to plan a trip. But these tools are about to take a big step.

Agentic A.I., a rapidly emerging type of artificial intelligence, will be able to find and pay for reservations with limited human involvement, developers say. Companies like Expedia, Google, Kayak and Priceline are experimenting with or rolling out agentic A.I. tools.

Travelers using agentic A.I. would set parameters like dates and a price range for their travel plans, then hand over their credit card information to the bot, which would monitor prices and book on their behalf...

Think of agentic A.I. as a personal assistant, said Shilpa Ranganathan, the chief product officer at Expedia Group, which is developing both generative and agentic A.I. trip-planning tools.

While the more familiar generative A.I. can summarize information and answer questions, agentic tools can carry out tasks. Travelers benefit by deputizing these tools to perform time-consuming chores like tracking flight prices."

Wednesday, July 16, 2025

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025


"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."