10 sats \ 36 replies \ @Undisciplined 23 Mar \ parent \ on: Stacker Saloon
Why doesn't it hurt our trust scores?
These would not generally be categorized as "good" posts. My understanding is that "trust" is a measure of our likelihood to zap "good" content.
deleted by author
From the FAQ: "... and lose trust by zapping bad content."
But zapping bad content isn't the same as no one zapping your comments, is it? I think "bad content" means the stuff that gets outlawed?
Many of us zap almost every reply we get, varying the amount to reflect quality. Oftentimes, these are comments that no one else zaps. Does that hurt our trust scores?
Roughly: person A trusts person B according to the binomial proportion of their zapping. That is, roughly,

# A agreed with B / (# B zapped - # B agreed with A)

(We construct a confidence interval so we can predict, with smaller samples, what this proportion is likely to be.) So if B is zapping a lot of stuff that A never zaps, A begins to trust B less. But, importantly, we normalize A's trust among everyone they trust, so if A isn't zapping at all for a period of time while person B continues to zap as does everyone else, A will continue to trust B roughly the same.

When A downzaps things B has zapped, # A agreed with B roughly becomes

# A agreed with B - (10 * # A disagreed with B)

Note: this is only a single link in the trust graph. A may never have agreed with B directly, yet still end up trusting B, if A trusts C and D, and C and D trust B.
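A minimal sketch of that pairwise computation, in Python rather than whatever SN actually runs. The Wilson score interval (and its 95% level) is my guess at the confidence interval mentioned above, and every name here is mine, not the site's code:

```python
from math import sqrt

Z = 1.96  # ~95% confidence level; the actual level used is an assumption

def wilson_lower_bound(successes: float, trials: float, z: float = Z) -> float:
    """Lower bound of the Wilson score interval for a binomial proportion,
    i.e. a conservative estimate of the true proportion at small samples."""
    if trials <= 0:
        return 0.0
    p = max(0.0, min(1.0, successes / trials))  # clamp: downzaps can push successes negative
    denom = 1 + z * z / trials
    center = p + z * z / (2 * trials)
    margin = z * sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return max(0.0, (center - margin) / denom)

def pairwise_trust(a_agreed_with_b: int, a_disagreed_with_b: int,
                   b_zapped: int, b_agreed_with_a: int) -> float:
    """trust(A -> B), per the proportion above, with downzaps weighted 10x."""
    successes = a_agreed_with_b - 10 * a_disagreed_with_b
    trials = b_zapped - b_agreed_with_a
    return wilson_lower_bound(successes, trials)

def normalize_trust(trusts: dict[str, float]) -> dict[str, float]:
    """Normalize A's outgoing trust among everyone A trusts, so A going
    quiet for a while doesn't erode anyone's share of it."""
    total = sum(trusts.values())
    return {b: t / total for b, t in trusts.items()} if total > 0 else trusts
```

For example, pairwise_trust(40, 1, 100, 10) comes out around 0.24: the raw proportion is (40 - 10) / 90 ≈ 0.33, and the lower confidence bound shaves it down to account for the sample size.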
I think I understand the trust graph concept. How is that converted into our effect on zaprank, though? It seems like we must also have something like a global trust score.
I figured it was something like that.
This ties into something darth brought up: is the algorithm non-linear in a way that discourages gaming it by splitting your activity across multiple accounts?
Trust cannot be created in an isolated part of the graph. Those accounts can only gain trust by someone trusted agreeing with them. If someone trusted isn't agreeing with them, they remain isolated. If someone trusted is agreeing with them, they can gain trust, but as we downzap the content they are promoting, it effectively re-isolates the subgraph and destroys the trust of the persons agreeing with the sockpuppets.
Zapping your own content also costs money, so all of this is less common than it would be otherwise. We only take 10% of zaps now, but it's something we've considered increasing should this behavior become more common. It's a simple way of putting pressure on the behavior without having to spend a lot of time doing cluster detection in our trust graph.
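For a sense of how the fee pressures self-zapping, here's a back-of-envelope sketch; the starting balance and the idea of cycling sats through sockpuppets are my own illustration:

```python
# Self-zapping a balance through accounts you control decays it
# geometrically: each zap forfeits 10%, so after n zaps 0.9**n remains
# (assuming the other 90% lands back in a wallet you control).
balance = 10_000.0  # sats; illustrative starting stake
for n in range(1, 8):
    balance *= 0.9
    print(f"after {n} self-zaps: {balance:,.0f} sats")
# by the 7th zap, less than half the original stake survives (0.9**7 ≈ 0.48)
```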
how highly trusted they are
That's the crux of my question. How trusted they are by who?
How trusted they are by who?
The viewer in the case of personalized ranking.
In the case of non-personalized ranking, it's how trusted they are by everyone after we've iterated on our pagerank-like algorithm a few times. We select my personal column of trust scores from the resulting matrix, i.e., global trust scores are what I see all the time, but most columns are nearly identical.
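A toy version of that iteration, in Python with made-up weights and names; I've used rows where the comment says columns (just a transposed orientation), and the five-user graph is invented to also show the isolation property from the sockpuppet discussion above:

```python
import numpy as np

def row_normalize(T: np.ndarray) -> np.ndarray:
    """Scale each user's outgoing trust to sum to 1 (all-zero rows stay 0)."""
    totals = T.sum(axis=1, keepdims=True)
    return np.divide(T, totals, out=np.zeros_like(T), where=totals > 0)

def iterate_trust(T: np.ndarray, steps: int = 3) -> np.ndarray:
    """Pagerank-like propagation: repeatedly walk the trust edges, so
    row i comes to reflect user i's view of everyone, including users
    they never zapped directly (A trusts C, C trusts B => A trusts B)."""
    T = row_normalize(T)
    M = T.copy()
    for _ in range(steps):
        M = row_normalize(M @ T)
    return M

# Toy graph: users 0-2 zap each other's content; users 3 and 4 are
# sockpuppets that only "agree" with one another.
T = np.array([
    [0.0, 0.6, 0.4, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.7, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],  # sockpuppet
    [0.0, 0.0, 0.0, 1.0, 0.0],  # sockpuppet
])
M = iterate_trust(T)
print(M[0].round(3))  # user 0's scores: the sockpuppet pair stays at 0.0
```

After a few rounds the well-connected users' score vectors converge toward the same values, which is consistent with "most columns are nearly identical"; meanwhile no trust ever flows into the isolated pair, matching the earlier point that trust cannot be created in an isolated part of the graph.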
deleted by author
That's sort of what inspired these questions. @grayruby was making fun of me because my zap of his fun fact had basically no effect on its ranking.
deleted by author
I wasn't thinking about outlawed (or just net downzapped) content. That would make sense.