TL;DR: LLMs can't do random, which is surprising considering that they are inherently probabilistic and the "temperature" setting controls the noise in that randomness.
They introduce a factor of randomness, but are always steered by whatever was said before. If you ask people for a random number between 1 and 100, they have been shown to have a strong bias towards certain numbers, with 37 being a particularly frequent answer. As LLMs have been trained on human data, I'm not too surprised they display a similar bias.
I'd be happy to hear thoughts from someone who knows better how the randomness is introduced, and whether such an obvious bias could be countered.
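For what it's worth, here's a rough sketch of how temperature sampling typically works (the logits are made-up toy numbers, not from any real model): the temperature only rescales the model's output distribution before a token is sampled, so any bias baked into that distribution (like favoring 37) survives at every temperature setting; it just gets expressed more or less sharply.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    # Scale logits by 1/temperature: low temperature sharpens the distribution
    # (more deterministic), high temperature flattens it (more "random").
    scaled = [l / temperature for l in logits]
    # Softmax turns the scaled logits into probabilities.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for the answers 1..100 after a prompt like
# "pick a random number between 1 and 100". If the training data makes
# "37" more likely, its logit is higher; changing the temperature only
# changes how strongly that bias shows up, it never removes it.
logits = [0.0] * 100
logits[36] = 2.0  # index 36 corresponds to the answer "37"
print(sample_with_temperature(logits, temperature=0.7) + 1)
```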