The problem we have now is clusters of trusted users, perhaps managed by the same person, upvoting themselves. If they behave well long enough on each account, they could upvote their own content from trusted accounts and there would be no silence. We also have no way of determining whether a user saw something and chose not to upvote it - unless of course we started tracking what users see, which is a no-go.
The best thing to do is likely the straightforward thing: for trusted users, show a ᐧᐧᐧ (dot dot dot) next to posts and comments and let them tell us they don't like something. It wouldn't even necessarily have to de-rank the post; it could just reduce the trust between those users.
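A minimal sketch of what that trust adjustment might look like, assuming pairwise trust scores between users. Everything here (the `trust` mapping, `record_dislike`, the decay constant) is hypothetical, not the site's actual implementation:

```python
# Hypothetical sketch: pairwise trust scores between users, reduced
# when a trusted user explicitly flags a post via the "..." menu.
TRUST_DECAY = 0.2  # assumed penalty per dislike, would need tuning

def record_dislike(trust: dict, flagger: str, author: str) -> None:
    """Reduce the trust edge from flagger to author.

    Note this only touches the edge between the two users; the post's
    rank is left alone, per the idea above.
    """
    current = trust.get((flagger, author), 0.0)
    trust[(flagger, author)] = max(0.0, current - TRUST_DECAY)

trust = {("alice", "bob"): 0.8}
record_dislike(trust, "alice", "bob")
print(trust[("alice", "bob")])  # ~0.6
```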
Edit: I need to think on it more and come up with some approaches ... There are likely graph algorithms that would let me detect clusters and reduce their significance ... I could also switch us to a private WoT (i.e. every user sees their own feed), but it's probably too early. Lots to consider
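For the cluster-detection idea, one off-the-shelf option is community detection on the upvote graph. Here's a rough sketch using networkx's greedy modularity communities; the size threshold and discount factor are made-up illustrations, not a worked-out policy:

```python
# Hedged sketch: find tight upvote clusters and discount votes that
# stay inside a small cluster, which is where sockpuppet rings live.
import networkx as nx
from networkx.algorithms import community

def cluster_discount(upvote_pairs, max_cluster_size=5):
    """Return a weight per (voter, author) upvote.

    Votes exchanged inside a small detected community get a reduced
    weight; everything else counts fully. Thresholds are assumptions.
    """
    G = nx.Graph()
    G.add_edges_from(upvote_pairs)
    clusters = community.greedy_modularity_communities(G)
    small = [c for c in clusters if len(c) <= max_cluster_size]
    weights = {}
    for voter, author in upvote_pairs:
        in_small_cluster = any(voter in c and author in c for c in small)
        weights[(voter, author)] = 0.1 if in_small_cluster else 1.0
    return weights

# Two accounts upvoting each other form a small cluster and get discounted.
pairs = [("a", "b"), ("b", "a"), ("a", "c"), ("x", "y")]
print(cluster_discount(pairs))
```

The appeal of this over per-post moderation is that it erodes the ring's influence everywhere at once rather than fighting one post at a time.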
Ah got it. I wonder if this is a problem at this stage of the site's maturity, or if it can solve itself over time.
When there are thousands of people upvoting good posts, would a bad actor still be able to maintain thousands of trusted accounts to upvote their own stuff?
Your second point about whether or not a user has seen something also makes me wonder whether the problem is temporary or permanent.
Won't content be harder to miss as the site grows larger if users are incentivized to upvote things the community will also find valuable?
All good points, but on the last one: we have no such incentive yet.