Picture the scene: Two fictional nuclear powers, Cold War-ish capabilities, and a crisis unfolding. Perhaps it’s a competition for vital but scarce resources, or a standoff over some disputed territory. Or even the slow burn of a fragmenting alliance exploited by a malevolent third party. We’ve seen human leaders confront this sort of thing, and recently. But how might today’s leading Large Language Models get on, and why would we care?
I’ve just published a study of today’s models navigating exactly this sort of terrain. The results are sobering, and I think their implications go far beyond national security. That’s because I was interested not only in what the models decided to do, but why.
Curious? Read on…
...read more at kcl.ac.uk