The nature of political polarization is that almost everything becomes politically coded. There may be no good reason for various opinions to be clustered together, but they are.
So, when there's an extreme censorship campaign against one side, as there clearly was, the available training data will be biased towards the side that wasn't censored.
I'm not even talking about the specifics of what was included or excluded for the purpose of training.
We had an intense decade-long period of big tech censorship online. If these models are training on what's available online, that is a very biased dataset, and there's no way to include the missing material because people began self-censoring to avoid being demonetized.
That doesn't mean much to me.
I'm talking about bias in the information produced for and available to the world. It's not about some specific training set. There's no available unbiased dataset.
What I'm saying is that these bots aren't influenced as much by the dataset they're ingesting (literally pirated libraries) as by the follow-up training, where they're adjusted to answer questions "correctly."
When some obscure offensive answer pops up, it's mostly because the model isn't doing what it was trained to do, whatever you happen to find offensive. The general alignment to human speech, and thus most of the bias, comes from how they tuned it.
Ok, fair enough. I'd be surprised if there weren't substantial viewpoint bias amongst the trainers, too.
Just about everyone with an advanced degree comes from the establishment left.
Yeah, I agree. Specialized LLMs are obviously gonna have some bias; that's kind of the whole point, right? But general ones should stay neutral (or keep a 'natural' bias), especially on key topics. I don't really know how you're supposed to properly check an LLM for bias, but I get that it's not easy, and it's definitely not fair or accurate to just say "it's biased this way or that way." It's way more complicated than that, for sure.
It's very hard and largely a guessing game, though "unlearning bias" has been done to an extent. See for example https://erichartford.com/uncensored-models
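The core idea behind "uncensored" models like those described at that link is to filter refusal and moralizing responses out of the instruction-tuning dataset before fine-tuning, so the model never learns that behavior. A minimal sketch of that filtering step, assuming a simple list-of-dicts record format and an illustrative phrase list (neither is the article's exact pipeline):

```python
# Hedged sketch: drop instruction/response pairs whose answers are canned
# refusals, then fine-tune on what remains. The marker phrases and record
# shape here are illustrative assumptions, not a specific project's config.

REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot assist with",
    "i'm sorry, but i can't",
    "it would not be appropriate",
]

def is_refusal(response: str) -> bool:
    """Cheap heuristic: does the response contain a known refusal phrase?"""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def filter_dataset(records: list[dict]) -> list[dict]:
    """Keep only pairs whose 'response' field is not a canned refusal."""
    return [r for r in records if not is_refusal(r["response"])]

if __name__ == "__main__":
    data = [
        {"prompt": "Explain TCP handshakes.",
         "response": "TCP opens with a SYN, then SYN-ACK, then ACK..."},
        {"prompt": "Do X.",
         "response": "As an AI language model, I cannot assist with that."},
    ]
    print(len(filter_dataset(data)))  # 1
```

In practice a keyword filter like this is crude (it misses paraphrased refusals and can clip legitimate answers), which is partly why this whole area stays a guessing game.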