1. an architecture that does not imitate dead drops. firstly you would need a fundamental design change so that nodes don’t become the owners of the actual content that users post to them. in other words, dumber nodes. this will never happen on Nostr in particular.
  2. something like Vitra, where we can know if users misbehave but cannot stop them from doing so. in a human network this seems ideal to me. like in real life, if someone does something to destroy your trust, you get burned and you move on. you just don’t interact with that person anymore. the key is knowing when you get burned (see the sketch after this list).
  3. utilizing a heavier blockchain-style system with zk-proofs to ensure that certain operations cannot be done. this is far too heavy imo, but if someone for some reason needs 100% guarantees, then i guess it’s the path they would prefer.
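to make option 2 concrete in Nostr terms (a rough sketch only, not how Vitra or any shipping client actually does it; the relay URL and function name are placeholders): a client can publish a NIP-09 kind-5 deletion and later ask each relay whether it still serves the supposedly deleted event. nothing stops a relay from keeping it, but you at least learn which relays burned you and can stop using them.

```ts
// Hypothetical helper: ask one relay whether it still serves an event whose
// author already published a NIP-09 (kind 5) deletion. This cannot force the
// relay to forget anything; it only detects that this relay "burned" you.
import WebSocket from 'ws'; // npm install ws

function relayStillServes(relayUrl: string, eventId: string, timeoutMs = 5000): Promise<boolean> {
  return new Promise((resolve, reject) => {
    const ws = new WebSocket(relayUrl);
    const subId = 'burn-check';
    let found = false;

    const timer = setTimeout(() => { ws.close(); resolve(found); }, timeoutMs);

    ws.on('open', () => {
      // NIP-01: request the event by id.
      ws.send(JSON.stringify(['REQ', subId, { ids: [eventId] }]));
    });

    ws.on('message', (raw) => {
      const msg = JSON.parse(raw.toString());
      if (msg[0] === 'EVENT' && msg[1] === subId) found = true; // relay still has it
      if (msg[0] === 'EOSE' && msg[1] === subId) {              // end of stored events
        ws.send(JSON.stringify(['CLOSE', subId]));
        clearTimeout(timer);
        ws.close();
        resolve(found);
      }
    });

    ws.on('error', (err) => { clearTimeout(timer); reject(err); });
  });
}

// usage: flag relays that ignored the deletion
// relayStillServes('wss://relay.example.com', '<deleted event id>')
//   .then((burned) => console.log(burned ? 'relay kept the event' : 'relay no longer serves it'));
```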
I think 95% of people would be perfectly comfortable with option 1 (maybe 2). You explain in the docs that it’s possible for someone to hack their client app and save things that are supposed to be deleted. The user realizes that none of their friends are weirdos and they are ok with that.
All of my friends are weirdos, but they're the kinds of weirdos I trust.
Again, all your trust is client side. Any client can look legit to the network while still saving everything it sees; there is no way for the network to know about it, so your right to be forgotten cannot be enforced.
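To illustrate what I mean (a deliberately minimal sketch; the relay URL is a placeholder, and any real client looks exactly the same on the wire): an ordinary NIP-01 subscriber can quietly archive every event it receives, and no relay or protocol rule can detect that the copy exists.

```ts
// Minimal sketch of a protocol-compliant client that archives everything it sees.
// On the wire it is indistinguishable from any other reader, which is the point:
// the network cannot know this copy exists, deletion requests or not.
import WebSocket from 'ws';
import { appendFileSync } from 'node:fs';

const ws = new WebSocket('wss://relay.example.com'); // placeholder relay

ws.on('open', () => {
  // Ordinary NIP-01 subscription: ask for recent text notes (kind 1).
  ws.send(JSON.stringify(['REQ', 'archive', { kinds: [1], limit: 500 }]));
});

ws.on('message', (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg[0] === 'EVENT') {
    // Keep a local copy that no later kind-5 deletion event can reach.
    appendFileSync('archive.jsonl', JSON.stringify(msg[2]) + '\n');
  }
});
```

A kind-5 deletion published later can tell honest clients to hide the note, but it cannot reach that file.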
BTW: I appreciate your last post, I just cannot see how what you ask is possible, and IMO it's not fair to ask for such an impossible feature from Nostr or any other network.
it all depends on your concern. i can edit the html of someone’s social post, screenshot it, and it would look legit. but the original author can plausibly deny that the post ever existed if needed.
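for what it’s worth, the flip side of that deniability is the signature check: a doctored screenshot carries no proof, which is exactly why the author can deny it, while a signed event that verifies cannot be denied. here’s a minimal sketch of the NIP-01 check, assuming the @noble libraries (not taken from any particular client):

```ts
// Sketch only: check whether a claimed event was really signed by the claimed
// author. A screenshot has no equivalent proof, which is what makes it deniable.
import { schnorr } from '@noble/curves/secp256k1';
import { sha256 } from '@noble/hashes/sha256';
import { bytesToHex, utf8ToBytes } from '@noble/hashes/utils';

interface NostrEvent {
  id: string;         // sha256 of the serialized event, hex
  pubkey: string;     // author public key, hex
  created_at: number; // unix seconds
  kind: number;
  tags: string[][];
  content: string;
  sig: string;        // BIP-340 Schnorr signature over `id`, hex
}

function verifyEvent(ev: NostrEvent): boolean {
  // NIP-01 serialization: [0, pubkey, created_at, kind, tags, content]
  const serialized = JSON.stringify([0, ev.pubkey, ev.created_at, ev.kind, ev.tags, ev.content]);
  const id = bytesToHex(sha256(utf8ToBytes(serialized)));
  if (id !== ev.id) return false;               // content doesn't match the claimed id
  return schnorr.verify(ev.sig, id, ev.pubkey); // signature must match the author's key
}
```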
i just think that the designers of Nostr are not being realistic when they say it must be either 100% guaranteed or it’s not even worth considering.
life is complicated when humans are involved. give users the tools to interact with each other in reasonable ways. and if people misbehave or destroy trust, then let humans deal with that as humans do, you know?
It’s not an impossible feature btw. Iris.to is doing it right now. Some SSB apps are doing it now too. Reticulum.Network could do it as well. Nostr just has a fundamental network design that would never allow for what i’m describing.