I don't see why muting needs any additional incentivization: it's already free and it only affects the muter. Even if it becomes something that impacts trust scores, the fact that it's free means it doesn't need to be incentivized.
Down-zapping needs to be incentivized because it's costly, and the equilibrium outcome is too little down-zapping because of free-riding.
I'll try to get a post out about this soon and we can hash it all out there.
The incentive for muting, understood as flagging, is the trust score itself, which rewards the moderation even at a personal level. Down-zapping could be rewarded in the same way, but at that point I think we could ditch down-zapping altogether and keep only muting.
reply
Looks like the trust score is the "web of trust" described here: #8349
Is there more info on how it works currently? The link in the post (https://stacker.news/wot) no longer works.
Also...here's an idea about muting.
How about this
  • Have a way to mute a low-quality poster, for free.
  • But ALSO be able to do a different type of mute, a 10x mute that would cost sats: a "mute with emphasis". That type of mute would be a signal to the rest of us that the poster is to be avoided, and it could be rewarded.
  • And then remove the downzap entirely.
That way you're not wasting time on individual posts; rather, you're acting at the user level. More efficient?
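Roughly what I'm imagining, as a throwaway sketch (the names, types, and the 100-sat "10x" cost are all made up, not anything SN actually has):

```typescript
// Hypothetical sketch of the two mute types, not Stacker News' actual API.
type MuteKind = 'free' | 'emphasis'

interface Mute {
  muterId: number
  mutedId: number
  kind: MuteKind
  costSats: number // 0 for a free mute, a paid fee for a "mute with emphasis"
}

const EMPHASIS_COST_SATS = 100 // placeholder for the "10x" cost

function createMute (muterId: number, mutedId: number, kind: MuteKind): Mute {
  return {
    muterId,
    mutedId,
    kind,
    costSats: kind === 'emphasis' ? EMPHASIS_COST_SATS : 0
  }
}

// Only the paid mute is broadcast as a signal to other users (and could be rewarded).
function isPublicSignal (mute: Mute): boolean {
  return mute.kind === 'emphasis'
}
```

The point is just that the free mute stays private to the muter, while the paid one becomes the public, rewardable signal.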
reply
There might be something there. I'm going to tinker as I go...
What we have to replicate is the standard free market:
  • If your product is good, people buy it (zap it), your trust score improves, which leads to more sales -> you grow
  • If your product is bad, people don't buy it (the equivalent of muting), your trust score degrades, which leads to fewer sales -> you decline
One key characteristic of the market is that each opinion comes from an actual person, that is, sybil tactics are extremely difficult or impossible. Opinions are free, so you don't have to pay to say that something is good or bad, which helps information get broadcast more efficiently and accurately.
Now, the reason we have "paid opinions" (i.e. down-zaps) is to discourage sybil behaviour. But this also discourages free opinion, because having to pay to say something is bad functionally penalizes that kind of opinion as much as it penalizes spam. To have free opinions while still penalizing sybil tactics, you would have to pay to enter SN upfront, but that would discourage everyone from joining SN. So an account must be free, to let people in, and opinions must be free, so as not to discourage flagging. How can we allow that while avoiding sybil tactics?
No reward other than a trust score improvement can be applied, again to avoid sybil tactics. Trust score rewards do translate into gains when you produce material that gets compensated. That PoW workflow is a great spam deterrent. A sybil tactic there is not a problem if the material is good; only spam is tackled that way, which is correct.
So, the only conundrum to work out is: what PoW can we implement to increase the probability that a user is an individual, so that the free market scheme works correctly? Pure behaviour is not a good indicator, thanks to bots, so the solution must involve a cost in sats. It can't be upfront, and it can't be at the moment of expressing an opinion. So where? The only way I see it working is by considering the zaps you gave, which, if I'm not mistaken, is already factored into the trust score.
So there's your PoW, in your trust score, which leverages your work and interactions by considering both the zaps you received and the zaps you gave.
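To make that concrete, here's a toy model of the idea (the weights and log-dampening are my own invention, not how SN actually computes trust):

```typescript
// Toy model only: trust grows with the zaps you received (your work being valued)
// and the zaps you gave (your skin in the game). Weights and log-dampening are
// arbitrary illustrations, not SN's real algorithm.
interface UserActivity {
  satsReceived: number // zaps on your posts and comments
  satsGiven: number    // zaps you sent to others
}

function trustScore ({ satsReceived, satsGiven }: UserActivity): number {
  const received = Math.log1p(satsReceived) // diminishing returns on received zaps
  const given = Math.log1p(satsGiven)       // the "PoW" of paying to zap others
  return 0.6 * received + 0.4 * given
}

// A fresh sybil account has neither received nor given zaps, so its trust
// (and therefore its influence) stays at zero.
console.log(trustScore({ satsReceived: 5000, satsGiven: 1200 }).toFixed(2))
console.log(trustScore({ satsReceived: 0, satsGiven: 0 }).toFixed(2)) // "0.00"
```

The log-dampening is just there so a whale can't buy unlimited trust with one giant zap; any similar diminishing-returns curve would do.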
It could then be as follows:
  • Down-zapping can be discarded, just to avoid confusion with "negative muting". Being "outlawed" remains at the discretion of a territory owner.
  • Everyone can mute, but the consequence of muting applies only to the one who mutes. The muted user is not affected in any way.
  • If a person has enough trust score (and proportionally to it), apart from muting they can also "negatively mute", which does impact the muted user's trust score. Of course, negative impacts are balanced against positive impacts (getting zaps, interacting, etc.) to get the end result.
  • The trust score should be shown next to every user, which is the only way to broadcast the information.
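A minimal sketch of how those rules could fit together (the threshold and the 5% proportionality factor are invented for illustration):

```typescript
// Hypothetical gating of "negative mutes" by the muter's own trust score.
// The threshold and the 5% proportionality factor are invented for illustration.
const NEGATIVE_MUTE_THRESHOLD = 1.0

function canNegativelyMute (muterTrust: number): boolean {
  return muterTrust >= NEGATIVE_MUTE_THRESHOLD
}

// A plain mute never touches the target; a negative mute lowers the target's
// trust proportionally to the muter's trust, while the positive signals (zaps,
// interactions) the target keeps earning push it back up elsewhere.
function applyNegativeMute (mutedTrust: number, muterTrust: number): number {
  if (!canNegativelyMute(muterTrust)) return mutedTrust
  const penalty = 0.05 * muterTrust
  return Math.max(0, mutedTrust - penalty)
}
```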
reply
I just put out my first post on the topic: #619793.
It's not specifically about Stacker News and downzapping, but it's the conceptual jumping-off point that I'll use for that discussion. Of course, if you want to put your thoughts together into a post, then I'll just jump into the comments.
reply
Just to keep the thread of discussion, I'd prefer to continue in your post, if you don't mind. I can paste the above comment there if you think it adds to it.
reply
Absolutely, it's a great comment. I still need to think more about trust scores. I'm not aware of any analogue to them in the experimental econ literature.
reply
Thank you, sir :)
reply
Being "outlawed" is discretionary?
I think "outlaw" comes from many downzaps. Let me find a recent thread about this.
I found it: #615359
reply
It does! But as far as I know, territories can also be moderated by the owner, can't they?
reply
The owner can moderate their territory, but outlaw status extends beyond the territory.
I own a territory, let's test if I can moderate you into an outlaw lol
reply
Oh no! :0
reply