
Tuesday, August 5, 2025

We need a new ethics for a world of AI agents; Nature, August 4, 2025


"Artificial intelligence (AI) developers are shifting their focus to building agents that can operate independently, with little human intervention. To be an agent is to have the ability to perceive and act on an environment in a goal-directed and autonomous way [1]. For example, a digital agent could be programmed to browse the web and make online purchases on behalf of a user — comparing prices, selecting items and completing checkouts. A robot with arms could be an agent if it could pick up objects, open doors or assemble parts without being told how to do each step...

The rise of more-capable AI agents is likely to have far-reaching political, economic and social consequences. On the positive side, they could unlock economic value: the consultancy McKinsey forecasts an annual windfall from generative AI of US$2.6 trillion to $4.4 trillion globally, once AI agents are widely deployed (see go.nature.com/4qeqemh). They might also serve as powerful research assistants and accelerate scientific discovery.

But AI agents also introduce risks. People need to know who is responsible for agents operating ‘in the wild’, and what happens if they make mistakes. For example, in November 2022, an Air Canada chatbot mistakenly decided to offer a customer a discounted bereavement fare, leading to a legal dispute over whether the airline was bound by the promise. In February 2024, a tribunal ruled that it was — highlighting the liabilities that corporations could experience when handing over tasks to AI agents, and the growing need for clear rules around AI responsibility.

Here, we argue for greater engagement by scientists, scholars, engineers and policymakers with the implications of a world increasingly populated by AI agents. We explore key challenges that must be addressed to ensure that interactions between humans and agents — and among agents themselves — remain broadly beneficial."