1337 sats \ 0 replies \ @k00b 4 Jan \ on: LLMs and SN, redux meta
I'm torn on these too. I don't like them because people who deliver AI-generated information this way are dishonest about its origin and inconsiderate of the accuracy and brevity of what they deliver. Their intent is often solely to extract value, whether as attention or sats.
On the other hand, they can deliver some incidental value even if it's debased by their intent.
Ultimately, my feelings are concerned with intent more than anything else. If Leibniz's discovery of calculus took more effort than Newton's, I wouldn't value it higher than Newton's. Probably the opposite. So my feelings aren't about "proof of work" at least. Proof of work is a proxy for intent where intent can't be measured. Yet I don't really need proof of intent either. I want something valuable, and proof of intent is a proxy for verifying that I'm receiving something of value.
Ideally we are skeptical of all information sources, LLM or not. The best information sources assist you in verifying the results for yourself. One way you can tell this LLM content from human content is that it doesn't attempt to do this, nor does it have the self-awareness to measure and report on its own authority.