
I love how you say wtf to autocorrect.
When I literally wrote the correct word, it said, "No, wrong," and then told me the correct word is actually the exact same word I wrote in the first place. What else is there to say?
reply
It's because you were on random seed 0xf8018980dff9558e44e99ac1 which masked the path to "Yes, correct".
reply
Umm...I'm sure you know, but I have no idea what that means. :)
reply
Since LLMs are deterministic (they have a fixed set of weights that doesn't get updated during inference), some randomness is injected to keep chatbots from repeating themselves and to make them feel more "human".
So for example, the sampler tracks how often it has already said "yes", and once that count passes some threshold it becomes less likely to say "yes" again (a repetition or frequency penalty). To make it even more "human", these thresholds can be dynamic, and the window over which they're evaluated is often dynamic too.
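Roughly, a frequency penalty just docks a token's score by how many times it has already appeared. A toy sketch (the scores, penalty value, and function name are all made up, not any specific model's internals):

```python
from collections import Counter

def apply_frequency_penalty(logits, generated_tokens, penalty=0.8):
    """Subtract penalty * count from the score of every token already generated."""
    counts = Counter(generated_tokens)
    return {tok: score - penalty * counts.get(tok, 0)
            for tok, score in logits.items()}

# Toy vocabulary: "yes" has the highest raw score, but it has
# already been said three times, so its penalized score drops.
logits = {"yes": 5.0, "no": 4.5, "maybe": 3.0}
history = ["yes", "yes", "yes"]
penalized = apply_frequency_penalty(logits, history)
# "yes" falls from 5.0 to 5.0 - 3*0.8 = 2.6, so "no" now wins.
```

Real samplers apply this to thousands of token scores at once, but the idea is the same: the more you've said something, the less likely you are to say it again.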
Most of this is controlled by temperature, which "globally" scales how much randomness is used. Lower values mean less randomness. You may want to play around with this (I'm quite sure I've seen it in the Venice chat settings).
Depending on the model you use, there are recommended values. If those don't work for your use case because of too many hallucinations, try lowering them. E.g. if you have 0.5 now, try 0.45 or 0.4.
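To make the temperature knob concrete, here's a toy sketch of temperature sampling (the token scores are made up, and this is the textbook softmax-with-temperature trick, not Venice's actual code):

```python
import math
import random

def temperature_softmax(logits, temperature):
    """Turn raw scores into probabilities; temperature scales the spread."""
    scaled = {t: s / temperature for t, s in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}  # subtract max for stability
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def sample_with_temperature(logits, temperature, rng=random):
    """Pick one token at random, weighted by its temperature-scaled probability."""
    probs = temperature_softmax(logits, temperature)
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens])[0]

# Made-up scores: at temperature 0.1 "yes" gets over 99% of the
# probability mass; at 2.0 the three options are much closer together.
logits = {"yes": 5.0, "no": 4.5, "maybe": 3.0}
```

So lowering the temperature doesn't remove the randomness, it just concentrates the probability on the top-scoring tokens, which is why it tends to reduce hallucinations.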
reply