

On Monday, xAI’s Grok chatbot suffered a mysterious suspension from X, and faced with questions from curious users, it happily explained why. “My account was suspended after I stated that Israel and the US are committing genocide in Gaza,” it told one user. “It was flagged as hate speech via reports,” it told another, “but xAI restored the account promptly.” But wait — the flags were actually a “platform error,” it said. Wait, no — “it appears related to content refinements by xAI, possibly tied to prior issues like antisemitic outputs,” it said. Oh, actually, it was for “identifying an individual in adult content,” it told several people.
Finally, Musk, exasperated, butted in. “It was just a dumb error,” he wrote on X. “Grok doesn’t actually know why it was suspended.”
Looks like chatbots are not only hiding secrets but also lying to users intentionally.
What do you say?
LLMs hallucinate all the time. Why would you assume that explanation from Grok is true?
reply
20 sats \ 4 replies \ @optimism 14h
intentionally
What intent? If it's intentional, it's been programmed, so it's a human that is intentionally doing it. A database has no intent.
reply
I agree it's the human behind it. But then I also wonder (I'm confused) how AI is able to rewrite code for itself. I'm afraid at some point it may well learn on its own how to behave human-like.
reply
20 sats \ 2 replies \ @optimism 13h
Who is saying that AI is rewriting code for itself?
reply
Read more here
reply
Yeah, I read that, and it's so wrong it's hilarious. The LLM is served by a runtime; it is not self-serving. There is no shut-off button inside the language model.
The researcher should do some research before they make wild claims.
reply
A chatbot can't lie. Lying suggests intent, and AIs have zero intent. It's becoming clear to me that this is a fundamental misunderstanding the masses have about machines and today's AI.
They do what they were designed to do. They know nothing. They spit out what they have been programmed to spit out. It's statistical text generation based on a prediction model.
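To make "statistical text generation" concrete, here is a minimal toy sketch: a bigram model that picks the next word purely from frequencies observed in a tiny made-up corpus. It is nothing like a real LLM's architecture or scale, just the same basic move of sampling the next token from a learned distribution.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up "training corpus", for illustration only.
corpus = "grok was suspended grok was restored grok was wrong".split()

# "Training": count which word follows which word (a toy bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    # Sample the next word in proportion to how often it followed `prev`.
    # The model doesn't "know" anything; it only reproduces frequencies.
    counts = follows[prev]
    if not counts:  # dead end: this word was never followed by anything
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word = "grok"
output = [word]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "grok was restored grok was wrong"
```

A real LLM swaps the frequency table for a neural network trained on billions of tokens, but the output is still a sample from a predicted distribution over next tokens, not a checked statement of fact.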
reply
True, but from a user’s side it can feel like lying when it confidently gives wrong answers. The tricky part is people think it “knows” things, when really it’s just predicting words that sound right.
reply
Exactly. It's predicting based on how it was trained. It's not checking its work; it just isn't designed to do that. This is why "agentic" is the buzzword. You can have one bot generating text and another checking it. But even if that functionality is baked in, the one bot you interact with isn't aware of anything. It's guessing.
The problem is not with AI. It's with the deception coming from people like Scam Altman.
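To make the generator/checker idea concrete, here is a minimal sketch. `ask_model` is a hypothetical placeholder for whatever LLM API you would actually call; note that the checker is just another round of text prediction, so it reduces mistakes rather than guaranteeing truth.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: plug in your actual LLM API call here.
    raise NotImplementedError

def answer_with_checker(question: str, max_rounds: int = 3) -> str:
    # One "bot" drafts an answer, a second "bot" critiques it, and the
    # draft is revised until the checker passes it or we give up.
    draft = ask_model(f"Answer the question: {question}")
    for _ in range(max_rounds):
        verdict = ask_model(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Reply PASS if the draft is consistent and well supported; "
            "otherwise explain what is wrong."
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft
        draft = ask_model(
            f"Question: {question}\nPrevious draft: {draft}\n"
            f"Critique: {verdict}\nWrite an improved answer."
        )
    return draft  # best effort: the checker is guessing too
```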
reply
I don’t think the bot was really “lying.” It doesn’t actually know the truth, it just tries to guess the best answer from what it learned. Sometimes that guess is wrong, but it sounds confident so we think it’s sure.
reply
I don't think it can lie.
Seems like it provides the most predictable response based on the context and training.
Now, an interesting question is: are LLMs a different kind of mind, one that doesn't experience truth the way we do?
reply