32 sats \ 33 replies \ @Natalia 23 Mar \ parent \ on: Stacker Saloon
but then zapping bad ones is not the same as no one zapping your comments, right? I think bad content means the stuff that got outlawed?
Roughly: Person A trusts person B according to the binomial proportion of their zapping. That is, roughly,

# A agreed with B / (# B zapped - # B agreed with A)

(We construct a confidence interval so we can predict, even with smaller samples, what this proportion is likely to be.) So if B is zapping a lot of stuff that A never zaps, A begins to trust B less. But, importantly, we normalize A's trust among everyone they trust, so if A isn't zapping at all for a period of time, and person B continues to zap as does everyone else, A will continue to trust B roughly the same.
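A minimal sketch of that estimate (assuming a Wilson-style lower bound for the confidence interval; the function names and guards are illustrative, not the actual implementation, and for simplicity it bounds the plain proportion agreed / zapped rather than the exact denominator quoted above):

```typescript
// Illustrative sketch only: per-edge trust of A in B as a confidence lower bound
// on the proportion of B's zaps that A also zapped.

function wilsonLowerBound(successes: number, trials: number, z = 1.96): number {
  if (trials === 0) return 0
  const p = successes / trials
  const denom = 1 + (z * z) / trials
  const center = p + (z * z) / (2 * trials)
  const margin = z * Math.sqrt((p * (1 - p)) / trials + (z * z) / (4 * trials * trials))
  return Math.max(0, (center - margin) / denom)
}

// agreed:  # of B's zapped items that A also zapped
// bZapped: # of items B zapped
function edgeTrust(agreed: number, bZapped: number): number {
  // bounds the plain proportion agreed / bZapped for simplicity;
  // the comment above uses a slightly different denominator
  return wilsonLowerBound(agreed, bZapped)
}
```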
When A downzaps things B has zapped,

# A agreed with B

roughly becomes

# A agreed with B - (10 * # A disagreed with B)

Note: this is only a single link in the trust graph. A may never have agreed with B directly and still end up trusting B, if A trusts C and D, and C and D trust B.
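Continuing the same sketch, the downzap adjustment could look like the following (the 10x factor is from the description above; clamping at zero is my own assumption):

```typescript
// Downzaps count heavily against agreements before the proportion is computed.
function effectiveAgreements(agreed: number, disagreed: number, penalty = 10): number {
  return Math.max(0, agreed - penalty * disagreed)
}

// e.g. edgeTrust(effectiveAgreements(agreed, disagreed), bZapped)
```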
reply
I think I understand the trust graph concept. How is that converted into our effect on zaprank, though? It seems like we must also have something like a global trust score.
reply
I figured it was something like that.
This ties into something darth brought up: is the algorithm non-linear in such a way as to discourage gaming it by splitting your activity across multiple accounts?
reply
Trust cannot be created in an isolated part of the graph. Those accounts can only gain trust by someone trusted agreeing with them. If someone trusted isn't agreeing with them, they remain isolated. If someone trusted is agreeing with them, they can gain trust, but as we downzap the content they are promoting, it effectively re-isolates the subgraph and destroys the trust of the persons agreeing with the sockpuppets.
Zapping your own content also costs money, so all of this is less common than it would be otherwise. We only take 10% of zaps now, but it's something we've considered increasing should this behavior become more common. It's a simple way of putting pressure on the behavior should we not want to spend a lot of time doing cluster detection in our trust graph.
reply
I guess what I'm asking is more whether an authentic user could increase their influence by splitting their activity across multiple accounts.
They're posting quality stuff from each (and zapping it from the other) and they're zapping all the same posts from other stackers, but with half coming from each account.
reply
Yes. If their multiple accounts are engaging in a way that many people trust, they can increase their influence.
This isn't something we've attempted to address directly, but this kind of behavior would most likely show up as a very well connected sub-graph, ie a cluster, and we can penalize such clusters.
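Purely as an illustration of what penalizing a cluster could look like (none of this is the actual detection logic), a suspiciously dense sub-graph is straightforward to measure once you have the trust edges:

```typescript
// Hypothetical sketch: a set of accounts that sends most of its trust to itself
// is easy to quantify, and its outgoing trust can be damped.

type Edge = { from: string; to: string; weight: number }

function internalDensity(members: Set<string>, edges: Edge[]): number {
  let internal = 0
  let total = 0
  for (const e of edges) {
    if (members.has(e.from)) {
      total += e.weight
      if (members.has(e.to)) internal += e.weight
    }
  }
  return total === 0 ? 0 : internal / total
}

// Multiplier applied to the cluster's outgoing trust; threshold is an assumption.
function clusterPenalty(members: Set<string>, edges: Edge[], threshold = 0.8): number {
  const density = internalDensity(members, edges)
  return density > threshold ? 1 - density : 1
}
```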
reply
That's interesting.
reply
how highly trusted they are
That's the crux of my question. How trusted they are by who?
reply
How trusted they are by who?
The viewer in the case of personalized ranking.
In the case of non-personalized ranking, it's how trusted they are by everyone after we've iterated on our pagerank-like algorithm a few times. We select my personal column of trust scores from the resulting matrix, ie the global trust scores are what I see all the time, but most columns are nearly identical.
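A rough sketch of what that iteration could look like, under my own assumptions about orientation (trust[i][j] = how much user i trusts user j, rows normalized; the "column" mentioned above then corresponds to one user's vector of scores):

```typescript
// Illustrative pagerank-like iteration: multiplying the normalized trust matrix
// by itself a few times spreads trust transitively; one user's resulting vector
// is then used as the (near-)global scores.

function rowNormalize(m: number[][]): number[][] {
  return m.map(row => {
    const s = row.reduce((a, b) => a + b, 0)
    return s === 0 ? row.map(() => 0) : row.map(v => v / s)
  })
}

function matMul(a: number[][], b: number[][]): number[][] {
  return a.map((row, i) =>
    row.map((_, j) => row.reduce((sum, _, k) => sum + a[i][k] * b[k][j], 0))
  )
}

function globalTrust(trust: number[][], pivot: number, iterations = 3): number[] {
  const base = rowNormalize(trust)
  let m = base
  for (let i = 1; i < iterations; i++) m = matMul(m, base)
  return m[pivot] // the pivot user's view of everyone, used as the global scores
}
```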
reply
deleted by author
reply
That's sort of what inspired these questions. @grayruby was making fun of me because my zap of his fun fact had basically no effect on its ranking.
reply
I appreciate the opportunity to attempt to help.
reply
deleted by author
reply
If we are, we're doing it very poorly, so you probably don't need to worry about it.
reply
That's a little insulting. We could successfully inside trade if we set our minds to it. :)
I wasn't thinking about outlawed (or just net downzapped) content. That would make sense.
reply