
Outrage focused on these "<pick your model> was naughty" outputs doesn't seem as valuable to me as outrage that the model just made up a line and added it to my dataset. We just don't find the latter kind of mistake outrageous.
Some people may want their LLM to talk about Hitler a certain way, and others may want it to always use inclusive language, but I assume that almost everybody wants the model not to invent things without telling us.
The morality hype may put pressure on the big players, but it's not necessarily pressure to make their models more reliable or more useful. It may just be pressure to make their models insipid when dealing with certain topics.
That's a good point. What would be the right kind of outrage/pressure?
reply
I was thinking about this post about trust in LLMs when it comes to the code in pacemakers. The author ends with this postscript:
as I was writing it I discovered that I am truly horrified that my car's brakes will be programmed by a contractor using some local 7B model that specializes in writing MISRA C:2023 ASIL-D compliant software.
Outrage based on fuck-ups that kill or harm people is already here -- but maybe we can expand that base to include less catastrophic outcomes: "No, I'm not using a model that gets basic details wrong."
reply
98 sats \ 1 reply \ @carter 17h
we need to be able to tune it to our preferences
reply
Yes! An NPU farm is on my Xmas wishlist.
reply