At their core, LLMs are just statistical models. They don't really have a concept of right or wrong information, just likely patterns of words.
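
A minimal sketch of the point, with made-up numbers: next-token prediction is just a probability distribution over a vocabulary. The tiny vocabulary and logits below are invented for illustration; a real LLM computes them from billions of parameters, but the sampling step is the same, and nothing in it consults any notion of truth.

```python
import math
import random

# Toy vocabulary and raw scores (logits) for some context like
# "The capital of France is". These values are invented for
# illustration; a real model would produce them from its weights.
vocab = ["Paris", "London", "beautiful", "a"]
logits = [4.0, 1.5, 2.0, 0.5]

# Softmax: turn raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.2%}")

# Sampling picks a token in proportion to its probability.
# No step here checks whether the continuation is *correct*;
# it only reflects how likely the word pattern is.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print("sampled:", next_token)
```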
And what is intelligent about that?
On one level, not much. On another level, you could argue that's how most humans, intelligent or otherwise, operate: on biases, assumptions, fuzzy patterns, and whatever "sounds good."