100 sats \ 4 replies \ @Scoresby 16 Oct \ parent \ on: Is Token Consumption Growth Slowing Down? AI
Also, my somewhat naive understanding goes like this: if a model produces lots of tokens, but does to require lots of compute to produce them, either:
- the model has become highly tuned to your prompts, or
- the model is giving you kind of garbage answers that look more like recitation than reasoning.
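Rough arithmetic for why that follows, using the usual ~2 × active-parameters FLOPs-per-token rule of thumb for a dense transformer (the model size and token counts below are made up purely for illustration):

```python
# Back-of-the-envelope: decode compute is roughly
#   FLOPs ~= 2 * active_params * tokens_generated
# so if token output grows a lot while total compute doesn't,
# the FLOPs spent per token must have fallen. Numbers are illustrative only.

def decode_flops(active_params: float, tokens: float) -> float:
    """Approximate forward-pass FLOPs to generate `tokens` tokens."""
    return 2 * active_params * tokens

before = decode_flops(active_params=70e9, tokens=1_000)   # hypothetical baseline run
after = decode_flops(active_params=70e9, tokens=10_000)   # 10x the tokens, same model

print(f"{after / before:.0f}x the compute for 10x the tokens at the same model size")
# To hold compute flat while emitting 10x the tokens, per-token cost has to drop ~10x:
# a smaller or distilled model, cached prompt prefixes, or cheap memorized patterns,
# which is roughly the "tuned to your prompts" / "recitation" split above.
```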
> if a model produces lots of tokens, but does to require lots of compute to produce them
does or does not?
Thanks, I think I get it now.
I'm not entirely sure how this pertains to reasoning output though! We do know that if there is more relevant context, the bot performs better (just like a human); so if you pass a sparse prompt, it will extend it on the output side (there isn't really a difference to the bot!) with a whole lot of "reasoning", and then, by self-extending its context through "autocomplete", it gets to a pattern where the answer resolves better.
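A minimal sketch of that framing, where prompt tokens and generated "reasoning" tokens are just the same growing context (the `next_token` generator and the marker strings are hypothetical stand-ins, not any particular API):

```python
import itertools

def answer_with_reasoning(prompt_tokens, next_token,
                          max_reasoning=256, answer_marker="<answer>", eos="<eos>"):
    context = list(prompt_tokens)          # possibly sparse prompt
    for _ in range(max_reasoning):
        tok = next_token(context)          # an ordinary autocomplete step
        context.append(tok)                # the model self-extends its own context
        if tok == answer_marker:           # "reasoned" itself into a richer context
            break
    answer = []
    for _ in range(64):
        tok = next_token(context)          # now decoding against the extended context
        if tok == eos:
            break
        context.append(tok)
        answer.append(tok)
    return answer

# Toy run with a scripted generator, just to show the flow:
script = itertools.chain(["hmm", "6*7", "is", "42", "<answer>", "42", "<eos>"],
                         itertools.repeat("<eos>"))
print(answer_with_reasoning(["what's", "6*7?"], lambda ctx: next(script)))  # -> ['42']
```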
Tuning a bunch of common reasoning patterns to be as cheap as possible is good though? The most asked question to an LLM is probably "@grok is this true?" lol. Might as well optimize for going through the motions of that.
Like so: