
I agree that it isn't, and even the result of training isn't true cognition, because the LLM has no concept of the consequences of its answers. It has nothing at stake. If I got a single sat for every time an LLM gave a bad answer, I'd have a bigger stack than Saylor right now.

1000% true

reply
If I got a single sat for every time an LLM gave a bad answer, I'd have a bigger stack than Saylor right now.

Ahahah! I don't know, that's too many sats!

696,676.49 × 100,000,000 = 69,667,649,000,000 sats

https://strategytracker.com/
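
For anyone who wants to check the math, here's a minimal sketch of the BTC-to-sats conversion, assuming the ~696,676.49 BTC figure quoted from the tracker above (just a point-in-time snapshot):

```python
# Minimal sketch: convert Strategy's reported BTC holdings to sats.
# The BTC figure is the snapshot quoted above, not a live value.
SATS_PER_BTC = 100_000_000  # 1 BTC = 100,000,000 sats

saylor_btc = 696_676.49
saylor_sats = saylor_btc * SATS_PER_BTC

print(f"{saylor_sats:,.0f} sats")  # 69,667,649,000,000 sats
```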

reply

That's like only a year of global Gemini usage.

reply

But the answers ain't all bad, right? Ahahah

reply

IDK. Gell-Mann amnesia says it's all bullshit unless it's doing actual tool calls (search and such).

reply