5635 sats \ 20 replies \ @Michelson_Morley OP 21 Oct 2023 \ parent \ on: Write an opinion you think is hated on SN. 2nd lowest rated comment wins. meta
:) Let me explain the experiment a bit then, since it doesn't make any difference for the results, and I would be happy to hear suggestions and criticism of any kind.
I am interested in ways to encourage people (in tight-knit communities; I call them pseudofamilies) to express terrible ideas.
Pseudofamilies (e.g. stacker.news, countries, religious sects) have collective intelligence because it's OK to be wrong, and you will be error-corrected without fear of exclusion. There is psychological safety, as Google puts it.
Through this error-correction, the collective gains intelligence. But unlike biological families, where genetics determine membership, pseudofamilies determine membership by way of dogma. Axioms. E.g. if I believe that transwomen are all simply men who dress like women, then I am excluded from the trans pseudofamily.
These two phenomena lead pseudofamilies to stagnation, a loss of swarm intellect. How many times must you error-correct an altcoin-supporting new member before you ignore/ban them instead? The more a collective learns, the more axioms are established, and the more haram it becomes to say dumb ideas.
So to answer the question about the experiment:
- If the winner is the most zapped one, then the incentive for players is "write something that coyly implies I know our true values". But I want to incentivize fearless error expression: I want players to "write something wrong".
- By telling people to guess what the pseudofamily thinks is wrong, there is no implication that the writer should be excluded; it is not their own opinion they are expressing. It is a way to incentivize collective observation of what dogma/axioms we actually have.
- The game is designed so that zapping people hurts their chance of winning. Instead, the incentive is to zap those you don't want to win. This means players who zap are forced to either support a position they disagree with, or let it win 80k sats.
- Conversely with zonks (which is a word I just invented to describe our "downvotes", flagging; correct me if you have better nomenclature). It also encourages new comments/errors.
In summary, I want to encourage tolerance of the expression of bad ideas by letting players de-radicalize themselves through performing haram actions. If you ask a man a small favor and he agrees, he is more likely to agree later to a greater favor. To give a fundamentalist the chance to consider the ideas of their heretics, first convince them to shake the heretic's hand.
- The "second to last ranked" vs just picking the lowest ranked comment is left as an exercise to the reader :)
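The incentive structure described above can be sketched as a toy scoring function. All names and sat amounts here are hypothetical, and the real SN ranking surely weighs more signals than raw zap totals; this only illustrates why zapping a comment hurts its chance at the bounty:

```python
# Toy sketch of the game's scoring rule: comments are ranked by total
# zaps, and the second-lowest-ranked comment wins the bounty. Zapping a
# comment therefore pushes it UP the ranking, away from the prize.

def winner(zap_totals: dict) -> str:
    """Return the id of the second-lowest-zapped comment
    (or the only comment, if there is just one)."""
    ranked = sorted(zap_totals, key=lambda c: zap_totals[c])
    return ranked[1] if len(ranked) > 1 else ranked[0]

# Hypothetical thread state: zapping a comment you agree with
# makes it less likely to take the bounty.
zaps = {"altcoins_are_fine": 10, "fiat_is_good": 25, "ban_memes": 500}
print(winner(zaps))  # → fiat_is_good
```

Under this rule, a player who wants a comment not to win is pushed to zap it, which is exactly the inversion of the usual "zap what you like" behavior the experiment plays with.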
Kudos for this interesting experiment, and the interesting theory behind it.
A thing I think about a lot wrt online communities is how to encourage "good-ness" where "good" is a tricky term to define. Maybe it's easier to say what it's not: what gets produced by the classic advertising model, where the site is hell-bent on increasing time-on-site, driving "engagement" (for a certain type of engagement), click-throughs, etc.
I think we all have a gut feel for how shitty and poisonous this is. What it feels like. So what are other ways a site can feel, what are other objective functions one could aspire to that affirm something valuable, that add something good to the world? How might one design for that, what interactions might encourage it?
There are some aspects of your "pseudo-family" idea in there, I think.
We share the same concerns. The ad-model, any business model where the users aren't paying for their lunch, will lead to enshittification. It's heartbreaking to see how easy it is to make a community turn on (report) each other, how banal / normal the people who thrive on doing that are, "the people who point at witches". Show them a picture of a witch (provided conveniently by your advertiser-partners) and see the toxic fire spread.
I think of these pointer people like our immune system's T-cells: their job is to hook onto "bad guys"; show them a picture (vaccine) of one and they make sure you never see a nazi/polio again. So I have sympathy for these people. Though thanks to ad-model businesses, they mostly cause autoimmune responses.
As for how to encourage non-toxicity: your guess is as good as mine. I think stacker can overtake reddit and hacker.news within a year if they play it right. Users are not the product on this site. This is the way to do it. Just need to lobby Apple to allow zaps to people (or solve that through an Apple-gated sidechain). It'll explode.
So what are other ways a site can feel, what are other objective functions one could aspire to that affirm something valuable, that add something good to the world?
I'd love to start a dialog around this, because if SN has a generic advantage over other for-profit community sites, this is it.
what are other objective functions one could aspire to that affirm something valuable
I haven't figured this out. It's at least providing what other social media sites provide, but faster.
A bitcoin nomad stopped through @PlebLab for a few months and shared private social media market research his company conducted that overwhelmingly showed people wanted truth faster.
Perhaps one of the consequences of optimizing for time on site is slowing down our discovery of truth? Perhaps an algorithm optimized for time on site directly or indirectly wants us in a state of engaged confusion. What warrants expending more time than solving a mystery?
How might one design for that, what interactions might encourage it?
If trad SM is designed to avoid it, whatever it is, then not avoiding it goes some distance.
As far as interactions, I like to think about virtualizing real life interactions. Mostly because there's thousands of years of prior art available. So I don't know, but that's where I like to look.
This is so intriguing. I'm trying to think of what truth faster could mean.
Related tangent: I have been obsessed, for several years now, with the question of: how fast can you learn something? This question can be answered in a dumb way and a smart way.
The dumb way is to operationalize "learn" in a really shallow fashion: for instance, being able to translate foreign language vocabulary, trivially, on a flashcard, e.g., they show you "amar" and you say "to love". Now maximize that.
The smart way is to ask: what does it mean to know something? Which is a hell of a question. If we stay in the realm of foreign language learning, it might be: to deploy this word ("amar") usefully, in context. To understand how it is used, which is always a little different across languages, even for simple things. (In Spanish, for instance, you wouldn't say "te amo" to your mom, even though in English you would say "I love you" to her.)
For a less stupid example, when it comes to extracting maximum value out of something like a book, I read so slowly. Many smart people read 100 books a year, I do a fraction of that, but I write in them, I dwell on them, I connect the ideas to other ideas, I look up articles on salient points. It is glacially slow, but at the end I can say that I possess the book in a way that most don't; I have the book's knowledge deeply integrated with the rest of the things that I understand. It is work to do this! It takes so long. And on the surface I read so much less than my friends. But the product of our having read is quite different.
So, with all that said, I'm trying to think of what truth faster could mean in this context. What kind of truth is it that people seem to want? What can they do, as a result of having apprehended those truths? What would the ideal experience be? Is it closer to 100 books, like my friends do, or five books, like I do?
You are wise.
How fast can we learn something? What an interesting question. Autistic savants show us that the brain can do things like perfectly drawing a city in every detail after a single helicopter ride (like your flash card). Indeed it seems that what the neurotypical brain is doing is "forget as much as possible", and savants are disabled in their ability to forget.
Have you heard of / read the philosopher and psychiatrist Iain McGilchrist? His books/lectures/videos are about the way our brains work, specifically how our two brain hemispheres interact with the world. You'd like him. The right hemisphere is always keeping context, while the left ignores as much as it can in order to focus. The right hemisphere looks for signs of predators; the left hemisphere looks at the prey it can grab. Here's a short clip where he discusses the importance of context.
I hope truth faster means honesty. That's what I want and seek out. Weird people, excluded, horrible, arrogant—if they are honest I want to read and hear them. Hook me up with an analog cable to the minds of every idiot and nutcase out there. That'd be fast truth.
Hopefully @k00b can get us more info on that research!
I love McGilchrist -- I've seen / listened to a number of interviews w/ him and really respect the work he's done. I have yet to read his masterpieces, but they're on my list. (It's such a commitment for me, as mentioned earlier, but I really do want to get to them.)
As part of my obsession with this idea of learning stuff, I've dabbled in some of the accounts that you've described, of savant-like learning. It's been notable to me that of the examples I've come across, the learning in most cases comes with crippling downsides. Borges wrote a famous short story, "Funes the Memorious", about one such example. A related idea, which you referred to, is the massive process of neural pruning that takes place as babies grow. Forgetting is a crucial part of being in the world. Without a good forgetting algorithm, we are crippled.
Point of all that is that simply "recording" things is a poor analogue for what we usually mean by learning. What we really want, usually, is something more approximated by "synthesis". I think this is consistent with the truth faster examples that @ekzyis and @k00b are giving -- they want, with maximum speed, to integrate the truth with their models of reality. (I think this is what they're asking for, having read the posts.)
I've been noodling on what this would mean in an online-community context, and what it would take to build for it. It would be fun to discuss in some venue.
research his company conducted that overwhelmingly showed people wanted truth faster.
This is fascinating. I've mentioned this elsewhere, an idea for a feature I want in a social network: Live replay of typing. Just like in google docs, seeing people type, misspell, correct, pause, etc, before submitting, is this not truth faster? I see it as a midpoint between a phone call and a text message, in terms of emotional information carried. Would mitigate botting as well.
This is fascinating.
Right?! It's so obviously true yet so obviously not what's being optimized for.
I've mentioned this elsewhere, an idea for a feature I want in a social network: Live replay of typing
Well some of the hidden truth of my comment is that it probably took me 40 minutes to write lol.
Just like in google docs, seeing people type, misspell, correct, pause, etc, before submitting
I think that'd be really neat but are we conflating transparency with truth? Truth is knowing an animal was killed for your meal. Transparency is watching it die before you eat.
Transparency is truth slower I think. But to your point, also truth verifiable.
I've taken to modifying truth faster to be relevant truth faster which I'm not sure is in line with the research, yet makes more sense to me.
I can't find the research from pleblab, do you have a link? Sounds very interesting.
By truth faster, does it mean e.g. twitter/tiktok, live content from celebrities/experts, minutes away?
What is relevant truth? What is truth, for that matter: are we talking mathematical publications, or are we talking content from people whom users trust? This is all very intriguing.
It's private research that I've never seen myself. A friend just shared the tldr of the results with me verbally.
I'll reach out and see if he can share the full report with me and double back to summarize the results (assuming they're not meant to be kept private).
By truth faster, does it mean e.g. twitter/tiktok, live content from celebrities/experts, minutes away?
Format was irrelevant afaik. Authoritativeness of sources was probably only marginally important.
I'm exclusively guessing, but I think truth just means factual, fast means somehow obviously factual, and relevant truth (being my variation) means facts the person cares about.
Well some of the hidden truth of my comment is that it probably took me 40 minutes to write lol.
Yesterday, I wrote a Github comment and after finishing it, I looked at the clock and thought, wtf, that took me 40 minutes!
Then I looked again and realized it was 1h and 40 minutes. Still can't believe it, I must have done something else in between, lol
I love this. I bet you are a kickass developer. And, insofar as you are not a kickass developer, you are moving with great velocity toward that endpoint.
Writing is thinking. Taking time to craft your thoughts about a commit, and to express them clearly, is a gift to others, and also like an intense workout for you.
Now I want to check out the repo and read these commit messages :)
Now I want to check out the repo and read these commit messages :)
I was talking about this Github comment, not about a commit message :)
Fortunately, I don't take so long for commit messages, lol
If I can't find a good commit message within seconds (max 1 minute probably), then my commit is not a good commit!
But the pull request description also took some time, I think. But that was definitely worth it. Makes life easier for @k00b to review and for me since I have written down my approach and can use it as a reference now. The TODOs are also nice to keep track of my progress.
@bitcoinplebdev inspired me to write better PR descriptions like this one :)
Writing is hard. But worth it.
people wanted truth faster
lol, I just realized that's also the reason why I am here
I am here to learn and hear all kinds of opinions so I can make up my mind on my own.
I guess that's what "truth faster" means.
But it shouldn't be a race. Else you might get lost.
Or find yourself clinging to a "local maximum", believing you found the truth when in fact you just found your truth.
It comes back to time preference again, I guess.
Thanks for sharing the motivation and experiment design!
I <3 the experiment, but unless I'm misunderstanding, it doesn't benefit from being confusing. Could the prompt be better written as:
Write an opinion everyone loves to hear on SN. Lowest ranking comment wins.
Your prompt is like a double negative.
Conversely with zonks (which is a word I just invented to describe our "downvotes", flagging; correct me if you have better nomenclature). It also encourages new comments/errors.
We've been calling it a downzap, but zonk is fun.
Hm, you might be right about the title. I've been told many times that I use double negatives excessively.
Write an opinion everyone loves to hear on SN. Lowest ranking comment wins.
I worry this title would be detrimental for lurkers: people who just zap but don't comment. They would end up seeing a list of our axioms/dogma, and since they're here, they'd agree and zap them, thus reinforcing the dogma for all. The beneficiaries would be the small number of people who actually tried to win the bounty. I'm not sure. I'll think on this more.
Downzap: excellent. Reminds me a bit of reddit (hm). Trying to think of the antonym of a zap. "Unzap" is a contender, but would be misleading because people might think they get their money back if they click it after accidentally zapping. Paz, zink, zuck, zonk, they don't work because they don't communicate that the action entails sending LND. Downzap is best.
Paz, zink, zuck, zonk, they don't work because they don't communicate that the action entails sending LND. Downzap is best.
I think that doesn't matter so much. That a zap means sending sats over Lightning is also something that wasn't immediately obvious; only experience taught you that. Not sure if there is even a definitive definition of that? It's probably just a swarm thing.
We can now build on that swarm knowledge using zonk. Zonk is related to zap, and zonk sounds like a sad sound, in minor / disharmonic (?), so I think you just need to explain it once to each person (like zaps) and they won't forget.
Also it's too late. I'll use zonk now haha
The game is designed so that zapping people hurts their chance of winning. Instead, the incentive is to zap those you don't want to win. This means players who zap are forced to either support a position they disagree with, or let it win 80k sats.
You are assuming all players take the greedy action, but I'd say the financial incentive is too low for that.
For me to go into full-greedy-actions mode (instead of cooperative actions that look for a genuine winner), the financial incentive would need to be substantial. I just zapped replies that I genuinely agree with.
I just zapped replies that I genuinely agree with.
Excellent! Thank you for playing. I actually don't assume anyone is doing anything due to greed. Almost everyone will (I hypothesize) behave as you describe: they'll zap what they agree with.
But what will people comment?