Showing posts with label scenario planning. Show all posts

Saturday, May 16, 2026

What Are Your Company’s AI Nightmares?; Harvard Business Review, May 11, 2026

"Before generative AI burst onto the scene in late 2022, companies took a more or less standard approach to managing the risks introduced by AI: They developed AI ethical risk (or Responsible AI or AI Governance) programs. These programs were designed by executives and focused primarily on writing and implementing enterprise-wide AI policies that are meant to explain how the organization will live up to its AI ethics values (or principles or pillars, as they are also called). When generative AI showed up, organizations updated their programs to accommodate the new technology. Now that AI agents are gaining traction, most will likely try to update yet again.

That would be a mistake. The standard approach to Responsible AI is fundamentally broken. 

I do not come to this conclusion lightly. It is the result of, first, seeing how the AI landscape has evolved in ways that create a diabolically complex risk landscape, and second, spending nearly a decade working with Fortune 500 companies across healthcare, pharmaceuticals, insurance, financial services, entertainment, and more to design and implement AI ethical risk programs. I’ve also worked in an advisory capacity with three of the largest consultancies in the world. I’ve had countless closed-door conversations with other leaders in the AI governance space.

The standard approach is too slow, too vague, and too hard to communicate. Instead of focusing on values and policy, companies would be better served by focusing on their worst-case scenarios—their AI ethical nightmares. That’s because this focus allows them to apply a novel, rapidly implementable approach that works for everything from narrow AI to governing AI agents."