
The AI industry risks repeating the same cultural failures that contributed to the Space Shuttle Challenger disaster: quietly normalizing warning signs while progress marches on.
The term Normalization of Deviance comes from the American sociologist Diane Vaughan, who describes it as the process by which deviance from correct or proper behavior or rules becomes culturally normalized.
I use the term Normalization of Deviance in AI to describe the gradual and systemic over-reliance on LLM outputs, especially in agentic systems.
At their core, large language models (LLMs) are unreliable (and untrusted) actors in system design.
This means that security controls (access checks, proper encoding, and sanitization, etc.) must be applied downstream of LLM output.
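To make the principle concrete, here is a minimal sketch (with hypothetical function names and an assumed allowlist-style tool dispatcher, not any vendor's actual API) of applying controls downstream of model output: authorization and encoding happen on what the model *produced*, regardless of what the prompt said.

```python
import html

# Hypothetical sketch: treat LLM output as untrusted input.
# Capabilities granted to the agent, fixed by the system designer.
ALLOWED_TOOLS = {"search", "summarize"}

def render_for_html(llm_text: str) -> str:
    """Encode model output before it reaches a browser context."""
    return html.escape(llm_text)

def dispatch_tool(requested_tool: str, user_tools: set[str]) -> bool:
    """Authorize downstream of the model: the check runs on the
    model's output, not on the prompt that produced it."""
    return requested_tool in ALLOWED_TOOLS and requested_tool in user_tools

# The model asks to call "delete_files" -- denied no matter how
# persuasive the injected instructions were.
print(dispatch_tool("delete_files", {"search", "delete_files"}))  # False
print(dispatch_tool("search", {"search"}))                        # True
print(render_for_html("<script>alert(1)</script>"))
```

The point of the sketch is that neither check consults the prompt at all; an indirect prompt injection can change what the model asks for, but not what the surrounding system permits.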
A constant stream of indirect prompt injection exploit demonstrations indicates that system designers and developers are either unaware of this or are simply accepting the deviance. It is particularly dangerous when vendors make insecure decisions for their user base by default.
I first learned about this concept in the context of the Space Shuttle Challenger disaster, where systemic normalization of warnings led to tragedy.
Despite data showing O-ring erosion at colder temperatures, the deviation from safety standards was repeatedly rationalized because previous flights had succeeded. The absence of disaster was mistaken for the presence of safety.
33 sats \ 0 replies \ @freetx 5 Dec
I don't really disagree with the underlying point, but have lots of small disagreements otherwise:
  • The "normalization of deviance" is a direct result of the "Loss of God" of secular society. There is no way to have one without the other... it's baked into the cake.
  • In a very, very analogous way, the "safety of LLMs" is a mirror of this... An LLM is just a statistical token generation machine; it's going to generate output based upon its training data. If you want an LLM to be "realistic" then you need to train it on realistic human conversation, which will include unsafe thoughts. It's, unfortunately, again baked into the cake.
I'm not meaning this to be picky of the researcher, but would she agree to live in a non-secular society? Being a professor, I somehow doubt it.