0 sats \ 6 replies \ @faithandcredit 28 Oct 2022 \ parent \ on: The Firings Begin: Twitter CEO, CFO, & Top Censor Escorted Out bitcoin
If you want to "solve" spam (it evolves and is a constant fight), messages need to be vetted and signed by a 3rd party that specializes in fighting spam. If done properly, there would be competing 3rd parties that fight spam, which would prevent actual censorship and whatnot. And users could even have a choice of wanting to see unsigned content, or content only signed by x, y and z.
Spam isn't solved via third-party verification.
Spam is solved by making it an unprofitable exercise for the spammer; this is one of the things proof of work (PoW) was originally made to solve.
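For context, here is a minimal sketch of that hashcash-style idea: the sender burns CPU time finding a "stamp" for each message, while the receiver verifies it with a single hash. The difficulty and message are made-up values, not any platform's actual scheme:

```python
# Hashcash-style proof-of-work stamp (illustrative sketch only).
import hashlib

DIFFICULTY_BITS = 18  # assumed difficulty: ~260k hashes of work on average

def leading_zero_bits(digest: bytes) -> int:
    value = int.from_bytes(digest, "big")
    return len(digest) * 8 - value.bit_length()

def mint_stamp(message: str) -> int:
    """Sender side: costly loop to find a nonce that meets the difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify_stamp(message: str, nonce: int) -> bool:
    """Receiver side: a single cheap hash checks the stamp."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

nonce = mint_stamp("gm, definitely not spam")
print(verify_stamp("gm, definitely not spam", nonce))  # True
```

The asymmetry is the point: minting costs real work per message, verifying is nearly free, so the cost scales with how many messages you send.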
reply
To me it seems like having a third party deciding on what is spam and what is not just raises an attack surface on free speech. They could decide your inappropriate opinion is "spam" and simply block you.
Imposing a real cost makes spamming unprofitable. 1 sat is not a lot for an individual, but for a spammer it adds up to a significant cost.
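Rough numbers (all assumed) to make that concrete: a 1-sat fee is noise for a normal user but dwarfs a bulk spammer's expected return per message:

```python
# Back-of-the-envelope cost comparison; every number here is an assumption.
FEE_SATS = 1                        # assumed per-message fee
USER_MESSAGES_PER_DAY = 50
SPAM_MESSAGES_PER_DAY = 10_000_000
REVENUE_PER_MESSAGE_SATS = 0.0005   # assumed value of one spam impression

user_cost = USER_MESSAGES_PER_DAY * FEE_SATS
spam_cost = SPAM_MESSAGES_PER_DAY * FEE_SATS
spam_revenue = SPAM_MESSAGES_PER_DAY * REVENUE_PER_MESSAGE_SATS

print(f"user pays {user_cost} sats/day")            # 50 sats
print(f"spammer pays {spam_cost:,} sats/day")       # 10,000,000 sats (~0.1 BTC)
print(f"spammer earns {spam_revenue:,.0f} sats/day")  # 5,000 sats
print(f"spam profitable? {spam_revenue > spam_cost}")  # False
```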
reply
To me it seems like having a third party deciding on what is spam and what is not just raises an attack surface on free speech.
It actually protects it, because when you have Twitter alone deciding what is spam and what is not, look what happens. A social media platform needs functionality where 3rd parties can sign messages or give checkmarks; it should not only be the social media company. Also, social media is a huge project: you manage servers, UX, content moderation. No single company can do all of those things well. So split up social media now! Let's have voluntary 3rd-party vetting and signing of messages and see what happens.
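A toy sketch of what that voluntary vetting could look like. The vetter name is made up, Ed25519 from the 'cryptography' package is used only as an example signature scheme, and the point is that clients choose which signers they trust:

```python
# Voluntary third-party vetting sketch: a spam-fighting service signs
# messages it has checked, and each client decides which signers to trust.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class Vetter:
    """A 3rd party that signs messages it considers non-spam."""
    def __init__(self, name: str):
        self.name = name
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()

    def vet(self, message: bytes):
        # a real service would run actual spam heuristics here
        if b"buy now!!!" in message.lower():
            return None                  # refuses to sign spam
        return self._key.sign(message)   # attestation: "not spam"

def client_accepts(message: bytes, signature: bytes, trusted_vetters) -> bool:
    """Client-side policy: show the message if any trusted vetter signed it."""
    for vetter in trusted_vetters:
        try:
            vetter.public_key.verify(signature, message)
            return True
        except InvalidSignature:
            continue
    return False

fighter_a = Vetter("SpamFighterA")   # hypothetical vetting service
msg = b"gm, stacking sats"
sig = fighter_a.vet(msg)
print(client_accepts(msg, sig, trusted_vetters=[fighter_a]))  # True
```

Because the signature travels with the message, any client can verify it without asking the platform, and users who distrust SpamFighterA can simply trust a different signer or none at all.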
reply
It's called government, and this is a terrible idea.
reply
There is no use of force involved; it's voluntary, so by definition it's not government.
reply
If you want to "solve" spam (it evolves and is a constant fight), messages need to be vetted and signed by a 3rd party that specializes in fighting spam.
Do they? With "messages need to be vetted", it seems like you are talking about companies that decide what is spam and what is not.
My definition of spam:
Spam is any kind of unwanted, unsolicited digital communication that gets sent out in bulk.
So adding some amount of "effort" to a message solves spam: for the spammer it no longer makes economic sense, since he will lose more than he gains (think unsolicited advertisement, which only works if sent out in bulk), no?
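As a rough illustration (assumed numbers), a couple of CPU-seconds of "effort" per message is nothing for a regular user but enormous at bulk-spam volume:

```python
# Rough time-cost comparison for a per-message proof-of-work "stamp";
# the ~2 CPU-seconds per message and the volumes are assumptions.
SECONDS_PER_STAMP = 2
USER_MESSAGES_PER_DAY = 50
SPAM_MESSAGES_PER_DAY = 10_000_000

user_cpu_hours = USER_MESSAGES_PER_DAY * SECONDS_PER_STAMP / 3600
spam_cpu_hours = SPAM_MESSAGES_PER_DAY * SECONDS_PER_STAMP / 3600

print(f"regular user: {user_cpu_hours:.2f} CPU-hours/day")   # ~0.03
print(f"bulk spammer: {spam_cpu_hours:,.0f} CPU-hours/day")  # ~5,556
```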
reply