It may not be particularly profound, but I was using ChatGPT to help me with some code and at one point I realized the way I was asking it questions was obnoxious. Or would be if it were human. Because a human would get impatient (for good reason), while ChatGPT answered with all the patience in the world, like you'd expect from a chatbot. I was asking that way because it was the easiest, laziest way; my convenience was the only factor and I had zero regard for the bot's patience, knowing it was infinite. (And I know its computational resources aren't, but that's different and only partially correlated.)
So "Why would a human get impatient?" I asked myself. Because a human, when asked my question, gives me their time. And they expect a reward in return. Usually some sort of satisfaction that they helped a fellow human being understand something; that their knowledge was useful and if they teach it to me, I will also put it into good use, thereby extending its reach, creating positive feedback loops advancing humanity that may bring them more prosperity. Along with the more selfish benefits like a chance of the gift being reciprocated directly, social recognition as a good teacher (potentially with prospects of monetizing it) or just as a good, helpful, 'selfless' person.
In terms of human action, it's a transaction, and they expect to profit from it. Patience is the amount of resources they're willing to invest while still believing they'll get a positive return.
If I had asked a human the way I asked ChatGPT, they'd quickly and justifiably have run out of patience: a stop-loss, probably accompanied by an emotional reaction - disappointment with their miscalculation and the resulting bad trade. I'd have had to put in more effort to get my question answered (perhaps more than I was willing to - I do my economic calculation too), and depending on the person, it might not have gotten answered at all, because some people are more patient than others.
But because it was an LLM, it did end up advancing my knowledge. I first thought it was because the incentives were different, but at the end of the day there are people behind the tool, and they have largely the same incentives. What's different are the numbers in the profit-and-cost calculation.
It touches on some interesting stuff about what we find engaging in communication -- or really, an information-theoretic exploration of the same. In other words, part of the "reward" is not the usual kind of reward we think about, but an inherent property of the communication itself.
I hadn't thought of it the way that you framed it here, but I do agree: there's such tremendous alpha in getting to ask all your dumb questions, basically for free. When I talk to GPT, in the back of my mind I have a process that asks: how much would I have to pay a consultant to give me this answer, and how long would it take to get it? The answer is often $300/hour and a week, versus two cents and twenty seconds.