This is a follow-up to my post yesterday, Punishment is a Public Good. Thank you to everyone who read, commented, and zapped that post. I really appreciate it.
Now I want to think through the implications of that topic for Stacker News, specifically. Before I jump into it, @Signal312 started this whole conversation with his post Low value, throw away comments - why are they made?, so if you want to follow the entire discussion, check that out.
The Problem
In brief, there's a free-rider problem with downzapping. Whoever downzaps bad content pays the full cost of the downzap, while we all benefit from bad content being outlawed. That means we each have an incentive to let someone else do the downzapping, which leads to less/slower downzapping than would be optimal.
Possible Solutions
Status Quo
First off, we may not need to do anything. The tools currently available to us (downzapping, muting, ignoring) may be sufficient. In my review of the literature on public goods games, simply allowing peer punishment often performs better than more complex punishment schemes.
It also appears that individualized positive rewards for contributions, like we have, perform better than punishment in some experiments.
Use the Current Rewards System
To me, the most obvious option (which certainly doesn't mean it's the right one) is to fold this into the rewards system. Downzapping bad content early could be treated the same way as zapping good content early. There are already disincentives to downzapping good content (it hurts your trust score), so we shouldn't end up with a major over-downzapping problem.
I'm thinking that "bad content" would mean content that got outlawed. There could also be a cap on what gets rewarded: downzaps on content that has already been outlawed wouldn't count.
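To make the cap concrete, here's a minimal sketch of how reward eligibility could work. Everything here (the `Downzap` type, `outlawed_at`, the pool-splitting rule) is invented for illustration and isn't actual Stacker News code; it just shows that "downzaps after outlawing don't count" is a one-line filter.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Downzap:
    stacker: str   # who downzapped (hypothetical field names)
    sats: int      # how much they spent
    at: datetime   # when they downzapped

def eligible_downzaps(downzaps: list[Downzap], outlawed_at: datetime) -> list[Downzap]:
    """Only downzaps placed before the content was outlawed earn rewards."""
    return [dz for dz in downzaps if dz.at < outlawed_at]

def split_reward(pool_sats: int, downzaps: list[Downzap], outlawed_at: datetime) -> dict[str, int]:
    """Split a reward pool among eligible downzappers, proportional to sats spent."""
    eligible = eligible_downzaps(downzaps, outlawed_at)
    total = sum(dz.sats for dz in eligible)
    if total == 0:
        return {}
    return {dz.stacker: pool_sats * dz.sats // total for dz in eligible}
```

So a stacker who piles on after the content is already outlawed gets nothing, which is exactly the cap described above.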
Use the Trust System
In the comments of the linked post (#619707), @Signal312 and @didiplaywell were discussing the possibility of having muting impact trust scores. It makes sense to me that muting someone should drive your trust score with them to zero.
Would something like that be enough to reduce spammers' reach to the point that they don't bother trying? Btw, check out their comments, because there's more detail and substance than I'm relaying here.
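The mute-zeroes-trust idea is simple enough to sketch. This is a toy model, not the real SN trust graph: trust is a dict-of-dicts keyed by viewer and author, and muting just short-circuits the lookup to zero.

```python
def effective_trust(trust: dict[str, dict[str, float]],
                    muted: dict[str, set[str]],
                    viewer: str, author: str) -> float:
    """Your trust score for someone you've muted is zero; otherwise look it up."""
    if author in muted.get(viewer, set()):
        return 0.0
    return trust.get(viewer, {}).get(author, 0.0)
```

If enough stackers mute a spammer, their content's effective reach drops toward zero without anyone paying to downzap.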
The trust system is part of developing a reputation on SN, in addition to your literal reputation amongst the stackers. Many public goods experiments have found reputation to be very effective at reducing anti-social behavior.
Stacker Sheriffs
Here's a fun idea that came out of talking to @Coinsreporter and @carlosfandango (#620797). We could have sheriffs of Stacker News! How on brand is that?
It turns out there are some papers demonstrating that designated punishers can be superior to strictly relying on peer punishers. One of those papers even finds that randomly designating the punisher each round is the most stable solution to sharing the costs of punishment.
This could be implemented right alongside current downzapping. The sheriff(s) would be notified of their status first thing in the morning and that's that. The literature appears to indicate that human altruism will take care of the rest.
Extra bells and whistles
- There could be a tip jar that gets divvied up amongst the sheriffs.
- They could get the rewards described above for downzapping.
- Their downzaps could be stronger.
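Picking the day's sheriffs could be as simple as seeding a PRNG with the date, so the same selection is reproducible for that day. This is just a toy sketch; how SN would actually seed, weight, or announce the selection is an open design question.

```python
import random
from datetime import date

def pick_sheriffs(stackers: list[str], day: date, k: int = 3) -> list[str]:
    """Pick k sheriffs for the day; same day and same stacker set -> same sheriffs."""
    rng = random.Random(day.toordinal())      # seed with the date
    return rng.sample(sorted(stackers), k=min(k, len(stackers)))
```

Since the selection only depends on the date and the set of stackers, the morning notification could be computed once and verified by anyone.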
Conclusion
We may be fine as is, but there are also some interesting options to pursue that would reduce the likelihood of a serious spam problem emerging. Downzapping and reputation are likely the most important tools to have, and we already have them.
Minor tweaks to the rewards and trust systems could help, but they should be undertaken with caution. We don't want to over-incentivize downzapping, since we're probably near the optimum already.
Simply selecting a few random sheriffs every day would be extremely on brand, and there's a good chance it would help.