The work we do as developers typically demands precision. An LLM's lack of intelligence can be well hidden in an essay it writes, which contains some facts, some claims, and plenty of filler language. However, when it is handed a logic problem expressed in human language, the statistical model underneath quickly breaks down. We're not looking for an answer, we're looking for the answer.
If you happen to deliver your prompt the "right" way, you can stumble upon correctness:
But if you prompt it a different way, it will be confidently wrong.
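You can test this sensitivity yourself. Here is a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name and the little puzzle are placeholders, not taken from any particular exchange. It sends the same logic problem phrased two ways and prints both answers so you can compare them side by side.

```python
# Minimal sketch: send the same logic puzzle phrased two ways and compare the answers.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the environment;
# the model name and the puzzle itself are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    # Phrasing A: states each step in order.
    "I have 3 apples and eat 2, then buy twice as many as I have left. "
    "How many apples do I have?",
    # Phrasing B: the same problem, with the steps buried mid-sentence.
    "After eating 2 of my 3 apples I went shopping and bought twice as many "
    "as remained; what's my apple count now?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print("->", response.choices[0].message.content, "\n")
```

Run it a few times and with a few rephrasings; the point is not any single answer but how much the answer can swing with the wording.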
So, is this "intelligence"? Can it gather user requirements better than you? No. There is no thought behind this; it depends on you to stop asking questions once you get the right answer.