All these models are trained (aligned) toward certain responses in the name of "safety". Alignment passes for safety only because these systems emulate intelligence rather than possess it. So it makes sense that models not trained with GPT as the judge of what counts as socially aligned (Claude, Mistral) respond differently.