The criticism is entirely valid: people forget that, like any software or machine that depends on human input, form matters. It is worth remembering that, despite the marketing, the word "intelligence" does not describe anything actually present in these things.
If it depends on a specific form of input to respond correctly, that is because it remains what it always was: a calculation presented in the form of language, not something truly intelligent.
Once you learn how it works, these mistakes make sense.
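A minimal sketch of why, assuming the tiktoken package (a tokenizer library used with OpenAI models): the model never sees individual letters, only integer token IDs, so "counting letters" is recall about tokens rather than inspection of a string.

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
    for word in ["blueberry", "congratulations"]:
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]
        # The model receives these integer IDs, not characters,
        # so it cannot simply scan the word for "b" or "r".
        print(word, "->", ids, pieces)

Exact splits vary by tokenizer, but common words often collapse into one or two tokens, hiding their spelling entirely.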
Fake news.... I just asked it "how many b's are in the word blueberry" and it answered 2.
I just got it to respond incorrectly to "how many r in congratulations", even when I said to check with Python: https://chatgpt.com/share/689a478b-e6ec-8008-b45b-4fe285cb27a2

This is an old explanation of why it gets it wrong: https://www.reddit.com/r/OpenAI/comments/1haxhjk/can_someone_explain_exactly_why_llms_fail_at/

I also noticed it seems to do better if it "thinks".
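For reference, the check it was told to run is a one-liner in any standard Python interpreter, and it returns 1:

    >>> "congratulations".count("r")
    1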
Maybe a bug in Python?
Or perhaps the word "congratulations" really does have 2 Rs
Either way, I would just trust the language model. Who are we to criticize it?
Maybe ChatGPT is Japanese