
I was trying to remember whether the Chiefs are three-time champs or just two-time. Instead of using Google, I decided to ask two AI chatbots that you pay with bitcoin.
  1. ppq.ai: I asked my question and was prompted to pay 361 sats, which I paid, and got no answer. Money stolen.
  2. unleashed.chat: I created an account last month and still had some credits on it.
The first question I asked was:
Who won the last 10 Super Bowls?
The chatbot answered starting with 2021. So I asked the chatbot next:
Who won the Super Bowl in 2023?
The answer: The winner of the most recent Super Bowl, held in February 2023, was the Philadelphia Eagles who defeated the Kansas City Chiefs with a score of 38-35. This victory marked the second championship win for the Eagles in franchise history.
Which is completely wrong!! And this is why I have an issue with AI. Why couldn't the AI just say, "Sorry, my data set ends at 2021, please keep your football questions to the 2021 season or earlier"? Isn't part of intelligence knowing your limitations and not giving answers about things you know nothing about?
In the end I should have just used Google!
My admittedly limited understanding is that AIs of this sort don't know anything, so they can't tell you when they don't know something.
They compare your prompt to other similar prompts they've seen and produce something that looks like the responses to those.
If most responses to a certain sort of prompt were "I don't know", then I expect that's what the AI would say too.
reply
Isn’t part of intelligence
Chatbots aren't really intelligent beings.
reply
At their core, LLMs are just statistical models. They don't really have a concept of right or wrong information, just likely patterns of words.
reply
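For what it's worth, here's a toy sketch of that idea in Python. The "model" below is just made-up bigram counts (the words and numbers are invented for illustration, not from any real training data): it can only tell you which word tends to follow a given context, never whether the resulting sentence is true.
```python
import random

# Made-up counts of which word followed which two-word context in some
# hypothetical training text. A real LLM learns something far richer,
# but the principle is the same: likelihood, not truth.
next_word_counts = {
    ("super", "bowl"): {"champion": 40, "winner": 25, "ring": 15, "party": 20},
    ("bowl", "champion"): {"chiefs": 55, "eagles": 30, "rams": 15},
}

def predict_next(context, counts):
    """Sample the next word in proportion to how often it followed `context`."""
    options = counts[context]
    words = list(options)
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

# The model happily produces a plausible-sounding word either way;
# nothing in it checks whether the answer is correct.
print(predict_next(("bowl", "champion"), next_word_counts))
```
There is no "I don't know" in that loop unless "I don't know" happened to be a common continuation in the training data.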
And what is intelligent about that?
reply
On one level, not much. On another level, you could argue it's how most humans, intelligent or not, operate: biases, assumptions, fuzzy patterns, and whatever "sounds good".
reply
Money stolen! You were robbed, my dear friend.
I don't trust AI at all and don't want it used as an information provider. AI will do more harm than good if applied this way.
reply
The AIs that transact in sats seem to be way behind. Hopefully they will catch up.
reply
0 sats \ 2 replies \ @k00b 8 Apr
These models don’t reason yet. They predict what will be said in response to the prompt based on their training data.
reply
That’s my issue with this whole AI thing. Aren’t reason and intelligence synonyms?
reply
50 sats \ 0 replies \ @k00b 13 Apr
Somewhat, yes. I'd say reason is an expression of intelligence that's often distinct from prediction (which is also an expression of intelligence). So, LLMs are doing something intelligent, but aren't capable of expressing all forms of intelligence.
reply