pull down to refresh

CrowdStrike researchers noted that if DeepSeek were asked to write code for an industrial system, 23% of responses contained bugs. But if they added that the code was needed by ISIS*, the error rate increased to 42%.
In projects for Tibet and Taiwan, code quality dropped even further, and sometimes the model simply refused to respond.
CrowdStrike suggests several explanations: either DeepSeek is toeing the party line and is deliberately harming the system, or the model was trained on weak code from these regions, or the AI ​​itself "decided" to downgrade the quality of its responses.
*a terrorist organization banned in many countries