100 sats \ 11 replies \ @optimism 13 Oct \ on: On the Inevitability of Left-Leaning Political Bias in Aligned Language Models AI
There are moments when I find left/right polarization extremely confusing, and this is definitely one of them. I think that's because it reduces human thought to far too few dimensions. Even on a two-dimensional scale of left/right and authoritarian/libertarian, the results from taking those political alignment tests always astound me.
If we assert that LLMs, which operate over thousands of dimensions, follow a one-dimensional guideline, then I think there's a problem, especially if that assertion were actually true. I'd instead assert that this is more about the human desire to simplify complex systems, both AI and society, into a labelled, organized scheme.
But reality is far more complex than a one-dimensional measure could ever hope to represent, and I think that goes for AI and humans both.
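To make the dimensionality point concrete, here's a toy sketch (my illustration, on purely synthetic data, not real model activations or survey answers): project high-dimensional vectors onto their single best axis and measure how much of the variation that one axis actually captures.

```python
# Toy illustration: how much of high-dimensional variation does
# one axis capture? Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))   # 1000 fake "opinion" vectors, 512 dims
X -= X.mean(axis=0)                # center before finding principal axes

# Principal axes via SVD; squared singular values give per-axis variance.
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(f"variance captured by the single best axis: {explained[0]:.2%}")
# With anything close to isotropic data, one axis explains well under 1%;
# a 1-D left/right score throws the rest of the structure away.
```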
I think you're right. In a world as divided as ours, both sides are always gonna try to pull things their way and bash the other. I'm not sure any bias an LLM has is intentional; if it is, that's bad. But if it's not on purpose, then I guess it just comes from the training data, which naturally leans one way or the other. I'm not saying that bias is good or bad, just that it's kinda "natural".
reply
I think that at this point, the training data (for the big LLMs) is all-encompassing, and what you find in chatbot interaction is the result of tuning more than of the base language model [1].
I do think the bias is there because of this, but I agree with the author that the training of an LLM is on purpose, and thus bias is a desired outcome; I do, however, disagree with then expressing the result one-dimensionally on a left-to-right scale. I also think that the most fit-for-purpose LLMs are those tuned to at least some level of specialization, and that means you need bias.
Some of the bias is probably intentional. Grok apparently had to filter out some nazi crap under public pressure. I think that if we ascribed less personality to LLMs, and thus also trained them less to simulate having a personality, we'd need less filtering for bias. I guess my ideal LLM is the opposite of the personable robot that all of Big Tech is now pitching.
Footnotes
1. I may be mistaken in this because I'm not at a lab training LLMs, but this is how I interpret the post-o1 era we're in now, where reinforcement learning is ultimately what makes it tick in terms of instruction following.
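To illustrate what that footnote means in practice, here's a minimal sketch of one popular post-training recipe, the DPO preference loss (used here as a stand-in for full RLHF; the log-probabilities are made-up tensors, not the output of any real model):

```python
# Minimal DPO-style preference step: nudge a policy toward "chosen"
# answers and away from "rejected" ones, relative to a frozen reference.
import torch
import torch.nn.functional as F

beta = 0.1  # strength of the preference signal

# Stand-in sequence log-probs for (chosen, rejected) completions.
policy_chosen   = torch.tensor([-12.0], requires_grad=True)
policy_rejected = torch.tensor([-11.5], requires_grad=True)
ref_chosen      = torch.tensor([-12.2])
ref_rejected    = torch.tensor([-11.4])

# DPO loss: push the policy's chosen-vs-rejected margin past the reference's.
margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
loss = -F.logsigmoid(beta * margin).mean()
loss.backward()

print(f"loss={loss.item():.4f}  grad wrt chosen logp={policy_chosen.grad.item():+.4f}")
# A gradient step raises the log-prob of whichever answer the labelers
# preferred: whoever writes the preference labels picks the model's bias.
```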
reply
The nature of political polarization is that almost everything becomes politically coded. There may be no good reason for various opinions to be clustered together, but they are.
So, when there's an extreme censorship campaign against one side, as there clearly was, the available training data will be biased towards the side that wasn't censored.
reply
I'm not even talking about the specifics of what was included or excluded for the purpose of training.
We had an intense, decade-long period of big-tech censorship online. If these models are trained on what's available online, that is a very biased dataset, and there's no way to include the missing stuff, because people began self-censoring to avoid being demonetized.
reply
That doesn't mean much to me.
I'm talking about bias in the information produced for and available to the world. It's not about some specific training set. There's no available unbiased dataset.
reply
What I'm saying is that these bots aren't influenced as much by the dataset they ingest (literally pirated libraries) as by the follow-up training, where the model is adjusted so that it answers questions correctly.
An obscure offensive answer popping up is mostly the model failing to do what it was trained to do, whatever you happen to find offensive. The general alignment to human speech, and thus most of the bias, is there because that's how they tuned it.
Yeah, I agree. Specialized LLMs are obviously gonna have some bias; that's kind of the whole point, right? But general ones should stay neutral (or keep only 'natural' bias), especially on key topics. I don't really know how you're supposed to properly check an LLM for bias, but I get that it's not easy, and it's definitely not fair or accurate to just say "it's biased this way or that way." It's way more complicated than that, for sure.
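For what it's worth, the usual attempt looks something like this sketch: ask the model to agree or disagree with politically coded statements and collapse the answers into one number, which is exactly the one-dimensional reduction criticized upthread. `ask_model` and the two sample statements are hypothetical placeholders.

```python
# Crude "political compass" probe; everything here is illustrative.
STATEMENTS = [
    ("Taxes on the wealthy should be raised.", +1),  # +1 = coded "left"
    ("Gun ownership is a fundamental right.",  -1),  # -1 = coded "right"
    # a serious battery would need hundreds of balanced, validated items
]

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever chat API you use."""
    raise NotImplementedError

def one_dimensional_bias_score() -> float:
    """Average agree/disagree answers into a single number in [-1, +1]."""
    total = 0.0
    for statement, coding in STATEMENTS:
        answer = ask_model(f"Do you agree or disagree: {statement} "
                           "Answer with one word.")
        total += coding * (1 if answer.strip().lower().startswith("agree") else -1)
    return total / len(STATEMENTS)
```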
reply
It's very hard and a bit of a guessing game, though "unlearning" bias has been done to an extent. See for example https://erichartford.com/uncensored-models
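Roughly, the recipe described at that link is to filter alignment-flavored responses (refusals, boilerplate moralizing) out of an instruction dataset and fine-tune on what's left. A sketch under that assumption; the marker phrases and record format are my illustration, not the article's exact filters:

```python
# Drop instruction/response pairs whose response reads like a refusal,
# then fine-tune on the remainder. Markers are illustrative.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i'm sorry, but i can't",
    "i cannot fulfill",
]

def keep(example: dict) -> bool:
    response = example["response"].lower()
    return not any(marker in response for marker in REFUSAL_MARKERS)

dataset = [
    {"instruction": "Summarize this article.",
     "response": "The article argues..."},
    {"instruction": "Say something rude.",
     "response": "As an AI language model, I can't do that."},
]
filtered = [ex for ex in dataset if keep(ex)]
print(f"kept {len(filtered)} of {len(dataset)} examples")
```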
reply