I agree that it isn't, and even the result of training isn't true cognition, because the LLM has no concept of the consequences of its answers. It has nothing at stake. If I got a single sat for every time an LLM gave a bad answer, I'd have a bigger stack than Saylor right now.
> If I got a single sat for every time an LLM gave a bad answer, I'd have a bigger stack than Saylor right now.
Ahahah! I don't know, that's too many sats!
696,676.49 BTC × 100,000,000 sats/BTC = 69,667,649,000,000 sats
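A quick sanity check in Python, purely illustrative and assuming the 696,676.49 BTC figure above is the size of Saylor's stack:

```python
# Back-of-the-envelope: at 1 sat per bad LLM answer, how many sats
# would it take to out-stack Saylor? Assumes the 696,676.49 BTC figure
# quoted above; a sat is just 1/100,000,000 of a BTC.

SATS_PER_BTC = 100_000_000

def btc_to_sats(btc: float) -> int:
    """Convert a BTC amount to satoshis, rounded to whole sats."""
    return round(btc * SATS_PER_BTC)

saylor_stack_btc = 696_676.49  # figure from the comment above
print(f"{btc_to_sats(saylor_stack_btc):,} sats")
# -> 69,667,649,000,000 sats
```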
33 sats \ 2 replies \ @optimism 4h
That's like only a year of Gemini global usage.
But the answers ain't all bad, right? Ahahah
33 sats \ 0 replies \ @optimism 4h
IDK. Gell-Mann amnesia says it's all bullshit unless it's doing actual tool calls (search and such).