It’s the more mundane scenarios that worry me, where a powerful but narrow AI, acting exactly as designed, triggers catastrophic unintended consequences: an automated trading algorithm causing a market crash, a power-grid management system unintentionally shutting down cities, or an autonomous drone swarm misinterpreting its instructions.
Much like a powerful human would. So the question is: how do you defend against risks like these now that power can, at least in theory, be scaled?