5 sats \ 0 replies \ @Scoresby 3 Jul \ on: Transformer based AI will not lead us to AGI/ASI and is just a hype machine AI
I pretty much agree. But I'm not sure that a next-token prediction machine based on everything available on the internet is doomed to unhelpfulness. So, they don't think -- what if that's fine? We don't need it to think, we need it to squeeze juice out of the lemon of humanity that we can't otherwise squeeze.
Here's a whimsical hypothetical: if there were a prediction market for next-token generation ("Here's a question, bet on what the next letter in the answer will be"), and if lots ("lots" is doing a lot of work here) of bets were made every second, would it provide a similar experience to interacting with an LLM?
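Just to make the hypothetical concrete, here's a minimal sketch of what "market-implied next-token sampling" could look like, assuming a pile of (token, stake) bets; the names and numbers are all made up for illustration, not a real system:

```python
import random
from collections import Counter

def sample_next_token(bets, temperature=1.0):
    """Treat total stake per token as a market-implied probability
    distribution over next tokens, then sample from it the way an
    LLM samples from its softmax output.

    bets: list of (token, stake) pairs placed by bettors.
    """
    stakes = Counter()
    for token, stake in bets:
        stakes[token] += stake

    # Convert total stake per token into sampling weights.
    tokens = list(stakes)
    weights = [stakes[t] ** (1.0 / temperature) for t in tokens]

    return random.choices(tokens, weights=weights, k=1)[0]

# Example: three bettors back " cat", one backs " dog".
bets = [(" cat", 50), (" cat", 30), (" dog", 25), (" cat", 10)]
print(sample_next_token(bets))  # most likely " cat"
```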
Probably. If the bets were all being made by real humans, it would likely do substantially better. This may not be a super helpful insight, but maybe it points to some of the problem: there's no "survival of the fittest model" function here. Models that make bad next-token predictions get some feedback, but they don't necessarily die.
We got human brains because humans with "bad" brains don't pass on their genes. Prediction markets work (if they work) because predictors who are wrong lose money. What happens when an LLM is wrong?
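For what it's worth, here's a rough sketch of the pressure a market applies that an LLM's training loss doesn't: bettors who are wrong lose bankroll and eventually drop out, so bad predictors stop influencing the price. Everything below (the bettors, the stake fraction, the payout rule) is invented for illustration:

```python
def settle_round(bankrolls, predictions, actual_token, stake_fraction=0.1):
    """bankrolls: {bettor: sats}; predictions: {bettor: predicted token}.
    Winners gain their stake, losers pay it, and broke bettors exit.
    """
    survivors = {}
    for bettor, bankroll in bankrolls.items():
        stake = bankroll * stake_fraction
        if predictions.get(bettor) == actual_token:
            bankroll += stake          # right: bankroll grows
        else:
            bankroll -= stake          # wrong: bankroll shrinks
        if bankroll > 1:               # broke bettors leave the market
            survivors[bettor] = bankroll
    return survivors

bankrolls = {"alice": 100.0, "bob": 100.0}
predictions = {"alice": " cat", "bob": " dog"}
print(settle_round(bankrolls, predictions, " cat"))
# {'alice': 110.0, 'bob': 90.0} -- run enough rounds and bob drops out
```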