
What I'm saying is that these bots aren't influenced so much by the dataset they ingest (literally pirated libraries) as by the follow-up training, where the model is adjusted to answer questions "correctly".
An obscure offensive answer popping up is mostly a case of the model failing to do what it was trained to do, whatever you happen to find offensive. The general alignment to human speech, and thus most of the bias, comes from how they tuned it.
Ok, fair enough. I'd be surprised if there weren't substantial viewpoint bias amongst the trainers, too.
Just about everyone with an advanced degree comes from the establishment left.
reply
42 sats \ 4 replies \ @optimism 4h
I wonder if they even know. All this stuff is automated. LLM-as-a-judge (= didn't read)
reply
At some point, though, all it has to build on is what's been made available. I don't see how bias can be avoided if what's available is biased.
reply
42 sats \ 2 replies \ @optimism 3h
It's worse: compounded bias, because if you use an LLM to train an LLM that then trains an LLM, you have bias to the power 3. This is the FAFO part of AI.
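Rough back-of-the-envelope sketch of that compounding (my toy model, not anything the labs publish): treat each training generation as inheriting the teacher's skew and multiplying it in, so a modest lean per generation grows geometrically rather than staying flat.

```python
# Toy model (assumption for illustration): each generation of
# LLM-trains-LLM multiplies in a relative skew b, so after n chained
# generations the skew compounds roughly like (1 + b)**n - 1,
# not n independent doses of b.

def compounded_skew(b: float, generations: int) -> float:
    """Relative skew after chaining `generations` rounds of training."""
    skew = 1.0
    for _ in range(generations):
        skew *= (1.0 + b)  # each student inherits and amplifies the teacher's lean
    return skew - 1.0

# One generation with a 10% lean stays at ~10%:
print(compounded_skew(0.10, 1))
# Three chained generations ("to the power 3" above) lands at ~33%:
print(compounded_skew(0.10, 3))
```

The exact numbers depend entirely on the made-up skew model, but the qualitative point stands: chained training multiplies whatever lean is already there instead of averaging it out.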
reply
Of course. It's the same reason inbreeding is problematic.
reply
42 sats \ 0 replies \ @optimism 3h
Nice analogy!
reply