88 sats \ 18 replies \ @k00b 14h \ on: Why OpenAI’s solution to AI hallucinations would kill ChatGPT tomorrow AI
It's fun to think about how portable this is to humans. Humans also have an incentive to lie and misrepresent their confidence level.
Lately, my friends and I have been using this trick: we just go, 'How much you wanna bet?' to see how confident someone really is about what they're saying. It works great, because more than half the time the person ends up admitting they're not that sure after all.
reply
I have a really hard time trusting anything a person says when their confidence level is always >90% regardless of the context. They're 'set' on believing, or having others believe, that they are almost always right, and it's extremely hazardous to listen to them. (This might also be why I'm not super pumped on AI. I don't find false confidence reassuring like most people seem to - I find it dangerous.)
reply
I know a few people who are always super confident like that. Some of them I know well, and I get that it's just their personality; those are the ones we usually try to bet with. But when I don't know the person that well, I don't do it, and most of the time I just stay quiet, even if I know they're wrong.
When it comes to AI, though, that kind of blind confidence is actually dangerous. A lot of people are gonna trust whatever it says without question, and that can go really bad, even deadly, as has already happened a few times.
reply
The danger isn't the confidence; it is the trust. The same trust people put in shit they see on TV, read on FB or X or in the newspaper, or what their cousin said. That's a holdover from a time when people's only exposure to what was going on outside their immediate circle came from the paper and the evening news on TV.
Somehow, a publisher's implied integrity remains, except it doesn't actually exist. And over the last decade or so, this has been actively (and nowadays overtly) weaponized.
Don't take my word for it, though. I'm biased, probably wrong, and just another fool tapping keys on his keyboard. No heroes.
reply
The danger isn't the confidence; it is the trust
Agreed. I actually want the AI to be confident. One of my gripes with it is that when I ask it for coding help, it sometimes gives me 3 different implementations. I don't want 3 implementations; I want you to be opinionated about which one is best. If I don't like it, you can trust me to ask you to re-evaluate.
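Something like this in the system prompt usually narrows it down to one answer. Just a sketch, not a tested recipe; it assumes the OpenAI-style chat completions API, and the model name is a placeholder.

```python
# Sketch only: ask for one opinionated implementation, not a menu of options.
# Assumes an OpenAI-style chat completions API; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "system",
        "content": (
            "When asked for coding help, pick the single implementation you "
            "consider best and commit to it. Do not list alternatives unless "
            "the user explicitly asks you to re-evaluate."
        ),
    },
    {"role": "user", "content": "How should I debounce this input handler?"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```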
So the problem isn't that AIs are too confident. The problem is that users put too much trust in the initial output.
reply
That's interesting.
Maybe it's because my trust level in the AI is already low, so I don't expect to actually use any of its implementations (at least not word for word). I'm mainly using it to get a sense of "where in the code should I be looking?" and "what's the general idea for the solution?" as a quicker alternative to reading and crunching all the code in my own mind.
I'm still gonna crunch enough code to understand what's going on, so the purpose of the AI is more like "find me the best jumping-off point".
reply
Yeah, I rarely use it for code except when I try something new to see what it can do. But then, I've spent 95% of my time over the last decade reviewing other people's code, so for me it's not much use in production. I've tried doing AI-enhanced code review, where I feed it the resulting code of a diff, but it didn't really work well for me on C++ code. I'm still a skeptic when it comes to production usage, really. Maybe autocomplete, but the one in my rich-ish text editor works fine for me.
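Roughly the shape of what I tried, sketched from memory rather than my actual tooling; it assumes an OpenAI-style API, and the model name is a placeholder.

```python
# Rough sketch of diff-in, review-comments-out. OpenAI-style API assumed;
# model name is a placeholder, and the git command is just the working-tree diff.
import subprocess
from openai import OpenAI

client = OpenAI()

# grab the working-tree diff as the review input (swap for whatever range you review)
diff = subprocess.run(["git", "diff"], capture_output=True, text=True).stdout

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    temperature=0.2,
    messages=[
        {
            "role": "system",
            "content": "You are reviewing a C++ change. Point out bugs, undefined "
                       "behavior, and lifetime issues only. Be concise.",
        },
        {"role": "user", "content": f"Review this diff:\n\n{diff}"},
    ],
)
print(response.choices[0].message.content)
```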
reply
Lower temperature and a non-reasoning model may improve things here. Also, putting
***IMPORTANT: BE CONCISE!***
at the bottom of the system prompt may work, due to the horrors of chat training, which to me is still the most ridiculous thing ever. I still have to test InternVL 3.5 (#1194686) on coding abilities, because they claim to beat Claude 3.7 with a 14b model, so I'd like to see what's what with that when I get a moment of peace.
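Something like this is what I mean, as a rough sketch; the local OpenAI-compatible endpoint and model name are placeholders, not a specific setup.

```python
# Sketch only: lower temperature plus the concise nag at the bottom of the
# system prompt. Assumes a local OpenAI-compatible server on localhost:8000;
# endpoint and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

system_prompt = (
    "You are a coding assistant. Answer with code first, explanation second.\n\n"
    "***IMPORTANT: BE CONCISE!***"  # the nag at the very bottom tends to carry the most weight
)

response = client.chat.completions.create(
    model="local-model",  # placeholder, e.g. a 14b coding model
    temperature=0.2,      # lower temperature, less rambling
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write a C++ function that splits a string on commas."},
    ],
)
print(response.choices[0].message.content)
```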
reply
It's funny how too much reasoning leads to lower confidence / wishy-washy answers.
Very human-like behavior.
reply
Spot on! Regular people don't even know what an LLM is; they just see 'AI' and think it's always the truth. You know that Samsung Z Fold ad? It's all about AI: this woman films a bunch of skincare stuff and asks her phone what she should get. That's a weak ad, because the AI could've just pulled its answer outta some random website.
reply
Haven't seen it and can't find that ad. But that is exactly what people use AI for, right? I had fun watching the "nano banana prompts" (#1218791) from the other day - it's completely useless, because now you can dress Brad Pitt up, photorealistically. lol. But I'm sure this is the amazing functionality we all need in our lives.
reply
this
reply