@AmadeusK525_old · 6 Feb 2023 · on: Nostr sucks: A contrarian viewpoint
I've been using Nostr-based clients for a couple of days and it's very interesting to see it working, there's no denying that. I do worry about its ability to scale in a decentralized manner, though.
The biggest thing I don't understand is why relays don't talk to one another. I feel like this would make everything a lot faster and, though it requires more bandwidth and processing power, it could work amazingly if a lot of people ran relays all around the world (basically like LN nodes).
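Here's a rough sketch of what I mean by relays talking to each other, just to make it concrete. Nothing like this exists in the protocol today; the peer relay URLs are made up, and it assumes the `ws` package plus plain NIP-01 message framing:

```typescript
// Hypothetical relay-to-relay gossip: forward every event we accept to a few
// peer relays. This is NOT how Nostr relays behave today.
import { WebSocketServer, WebSocket } from "ws";

const PEER_RELAYS = ["wss://relay-a.example", "wss://relay-b.example"]; // assumed peers
const peers = PEER_RELAYS.map((url) => new WebSocket(url));
const seen = new Set<string>(); // event ids we already forwarded

const server = new WebSocketServer({ port: 8080 });

server.on("connection", (client) => {
  client.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());
    // NIP-01: a client publishes with ["EVENT", <event>]
    if (msg[0] !== "EVENT") return;
    const event = msg[1];
    if (seen.has(event.id)) return; // avoid two peered relays bouncing the same event forever
    seen.add(event.id);
    // ...store the event locally and serve it to subscribers as usual...
    // then gossip it to peer relays so they can serve it too
    for (const peer of peers) {
      if (peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify(["EVENT", event]));
      }
    }
  });
});
```

The `seen` set is only there as loop prevention; a real design would need something smarter than an unbounded in-memory set, but it shows the shape of the idea.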
Data is handled in a weird way, too: you still don't own your data unless you run your own relay and publish to it (and if you do, you have to convince people to connect to it for them to see your posts). A relay could single-handedly wipe all of your data in an instant and it'd be forever lost, right? That's my understanding right now.
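To show what I mean, this is roughly how publishing works today: a signed event only exists on the relays you explicitly send it to. The relay URLs are invented, signing is left out (a library normally computes the id and signature), and it assumes a runtime with a standard WebSocket global:

```typescript
// Sketch of client-side publishing under NIP-01: the same signed event is sent
// to each configured relay, and those copies are all that exists of your data.
const RELAYS = ["wss://relay-a.example", "wss://relay-b.example", "wss://relay-c.example"];

interface NostrEvent {
  id: string;        // sha256 of the serialized event
  pubkey: string;    // author's public key
  created_at: number;
  kind: number;      // 1 = short text note
  tags: string[][];
  content: string;
  sig: string;       // Schnorr signature over the id
}

function publish(event: NostrEvent): void {
  for (const url of RELAYS) {
    const ws = new WebSocket(url);
    ws.onopen = () => ws.send(JSON.stringify(["EVENT", event])); // NIP-01 publish
    ws.onmessage = (msg) => {
      const reply = JSON.parse(msg.data.toString());
      // ["OK", <event id>, <accepted?>, <message>] is the relay's acknowledgement
      if (reply[0] === "OK" && reply[1] === event.id) ws.close();
    };
  }
}
```

If one of those relays deletes the event, the only "backup" is whichever other relays happened to get it, which is exactly the fragility I'm worried about.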
Nostr is a very interesting idea, and the fact that a lot of people are investing in it is a great thing IMO, but I wholeheartedly believe that if data isn't dynamic, being passed from one relay to another and never stored in a single location, it won't work on a global scale.
Here's my pitch:
I think people have been thinking in terms of two components for a while now, and that may not always be the best solution. In the case of social media, the end user running the client wants to see everything they query very quickly, they want to be able to see posts from anyone around the world if they search for them (regardless of which relays they're connected to), and they want data security, so that their data isn't stored in just a single location (a little bit like torrents). So we need:
- Clients
- Relays
- Indexers
Indexers would be nodes that query known relays and keep track of which public keys are available on which relays. Clients query indexers to know where to look for things, then query that relay to actually get them. If relays talked to one another, they could keep connections alive with other relays, like a graph; then, if a client asks for data from a relay it doesn't have a direct connection to but shares other relays with, it wouldn't need to open more connections, which seems to be a big source of slowness in the current protocol (I saw someone say they tried connecting to 40 relays and things got very slow).
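Here's a very rough sketch of the indexer idea, with the indexer and a client collapsed into one file for brevity. None of this exists in Nostr today: the relay URLs are invented, the "indexer API" is just an in-memory map, and it assumes a standard WebSocket global plus NIP-01 messages:

```typescript
// Hypothetical indexer: crawl known relays, record which pubkeys publish where,
// and let a client ask "which relays carry this pubkey?" before querying them.

// pubkey -> set of relay URLs the indexer has seen that pubkey on
const index = new Map<string, Set<string>>();

const KNOWN_RELAYS = ["wss://relay-a.example", "wss://relay-b.example"];

// Indexer side: subscribe to each known relay and record authorship.
function crawl(relayUrl: string): void {
  const ws = new WebSocket(relayUrl);
  ws.onopen = () => {
    // NIP-01 subscription asking for a batch of recent events from any author
    ws.send(JSON.stringify(["REQ", "crawl", { limit: 500 }]));
  };
  ws.onmessage = (msg) => {
    const data = JSON.parse(msg.data.toString());
    if (data[0] !== "EVENT") return; // relay replies with ["EVENT", <sub id>, <event>]
    const pubkey: string = data[2].pubkey;
    if (!index.has(pubkey)) index.set(pubkey, new Set());
    index.get(pubkey)!.add(relayUrl);
  };
}

KNOWN_RELAYS.forEach(crawl);

// Client side: ask the indexer where a pubkey lives, then query only those relays.
function relaysFor(pubkey: string): string[] {
  return [...(index.get(pubkey) ?? [])];
}

function fetchNotes(pubkey: string): void {
  for (const url of relaysFor(pubkey)) {
    const ws = new WebSocket(url);
    ws.onopen = () =>
      ws.send(JSON.stringify(["REQ", "notes", { authors: [pubkey], kinds: [1] }]));
    ws.onmessage = (msg) => {
      const data = JSON.parse(msg.data.toString());
      if (data[0] === "EVENT") console.log(data[2].content); // a note from that author
      if (data[0] === "EOSE") ws.close(); // relay has sent everything it has stored
    };
  }
}
```

The point is that the client only ever opens connections to relays that actually have the data it wants, instead of fanning out to dozens of relays and hoping.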
I'm probably not technically proficient enough to really dig into this and how it'd work, but I do believe there are better solutions to the problem than what Nostr currently offers.