LLMs are not AI in the way the average person thinks; they're actually pretty dumb. The problem is that they achieve spectacular results, and that may lead some companies developing critical systems to decide those results are good enough for them.
An LLM may have a similar or even lower error rate than a human at the same job, but the issue is that when a human makes a mistake, we review it and update procedures so we never stumble onto the same problem again. When an LLM hallucinates, good luck finding the actual reason for it.
With human-run systems we are on a slow but steady path of quality improvement. With LLM-run systems we may even get a big jump at first and a fairly predictable error rate, but one that never really goes down.
You're right. I find them great for summarizing content and providing sources.
Sometimes they just make up sources though, so you have to actually check.