190 sats \ 0 replies \ @didiplaywell 23 Jul \ on: Punishment is a Public Good econ
(To continue, along these same lines, the discussion that @Signal312 started in this post, I'm pasting my comment here too.)
Regarding compensation and punishment within SN, what we have to replicate is the standard free market:
- If your product is good, people buy it (zaps), your trust score improves, which leads to more sales -> you grow
- If your product is bad, people don't buy it (equivalent to muting), your trust score drops, which leads to fewer sales -> you shrink
One key characteristic of the market is that each opinion comes from an actual person, that is, sybil tactics are extremely difficult or impossible. Opinions are free, so you don't have to pay to say that something is good or bad, which helps information get broadcast more efficiently and accurately.

Now, the reason we have "paid opinions" (i.e. down-zaps) is to discourage sybil behaviour. But this also discourages free opinion, because having to pay to say something is bad penalizes that kind of opinion as much as it penalizes spam. To have free opinions while still penalizing sybil tactics, you would have to charge for entering SN upfront, but that would discourage everyone from entering SN. So an account must be free, to let people in, and opinions must be free, so as not to discourage flagging. How can we allow that while avoiding sybil tactics?
No reward other than trust score improvement can be applied, again, to avoid sybil tactics. Trust score rewards do translate into gains at the moment of producing material that gets compensated. That PoW workflow is a great spam deterrent. A sybil tactic there is not a problem if the material is good; only spam is tackled that way, which is correct.
So, the only conundrum to work out is: what PoW can we implement to increase the probability that a user is an individual, so that the free-market scheme works correctly? Pure behaviour is not a good indicator, thanks to bots, so the solution must involve a cost in sats. It can't be upfront, and it can't be at the moment of expressing an opinion. Where, then? The only way I see it working is by considering the zaps you gave, which, if I'm not mistaken, are already factored into the trust score.
So there is your PoW, in your trust score, which leverages your work and interactions by considering both the zaps you received and the zaps you gave.
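To make the idea concrete, here is a minimal sketch of a trust score built from both signals. All names, weights, and the log scaling are illustrative assumptions of mine, not SN's actual algorithm; the point is only that zaps given act as a sybil cost, because a ring of fake accounts can zap each other but the sats spent are real.

```python
import math

# Hypothetical sketch: weight both zaps received (your material was
# valued) and zaps given (you spent real sats engaging, which is the
# PoW a sybil cannot fake for free). Weights are illustrative.
def trust_score(zaps_received_sats: int, zaps_given_sats: int,
                w_received: float = 0.6, w_given: float = 0.4) -> float:
    """Combine both signals on a log scale so whales don't dominate."""
    received = math.log1p(zaps_received_sats)
    given = math.log1p(zaps_given_sats)
    return w_received * received + w_given * given

# An account that only receives zaps (a possible sybil ring) scores
# lower than one that also spends sats interacting with others.
passive = trust_score(zaps_received_sats=10_000, zaps_given_sats=0)
active = trust_score(zaps_received_sats=10_000, zaps_given_sats=5_000)
```

The log scale is one possible design choice: it keeps a single large zap from outweighing sustained participation.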
It could then be as follows:
- Down-zapping can be discarded, just to avoid confusion with "negative muting". Being "outlawed" remains at the discretion of the territory owner.
- Everyone can mute, and the consequence of muting applies only to the one who mutes. The muted user is not affected in any way.
- If a person has enough trust score (and in proportion to it), apart from muting, they can also "negatively mute", which does impact the muted user's trust score. Of course, negative impacts are balanced against positive impacts (getting zaps, interacting, etc.) to produce the end result.
- The trust score should be shown next to every user, which is the only way to broadcast the information.
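The rules above could be sketched as follows. The threshold and scaling factor are assumptions I'm inventing for illustration; the shape of the logic is what matters: anyone can mute locally, but only a sufficiently trusted user's "negative mute" touches the target's score, in proportion to the muter's own trust, and it is netted against the target's positive signals.

```python
# Illustrative parameters, not proposed values.
NEGATIVE_MUTE_THRESHOLD = 5.0  # minimum trust required to negatively mute
IMPACT_FACTOR = 0.1            # impact per unit of muter trust

def negative_mute_impact(muter_trust: float) -> float:
    """Impact on the muted user's score, proportional to muter trust."""
    if muter_trust < NEGATIVE_MUTE_THRESHOLD:
        return 0.0  # below threshold the mute stays local-only
    return IMPACT_FACTOR * muter_trust

def net_trust(positive_signals: float, negative_impacts: list[float]) -> float:
    """Balance negative mutes against positive impacts (zaps, activity)."""
    return positive_signals - sum(negative_impacts)

# A low-trust account's mute changes nothing; a high-trust one's does.
impacts = [negative_mute_impact(t) for t in (1.0, 8.0)]
score = net_trust(positive_signals=10.0, negative_impacts=impacts)
```

Making the impact proportional to the muter's trust is what closes the sybil loop: a swarm of fresh accounts has near-zero trust, so their negative mutes carry near-zero weight.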