@jeff · 6 Feb 2023 · on: Nostr sucks: A contrarian viewpoint
I think this goal:
...should be second to:
And then the Twitter use case gets built on top of that.
If you do that, you can split all content into two categories for sharing:
- recent content
- archived content
And split the network into the following architecture:
- clients - very similar to now, except they have three new optional jobs baked into the standard:
  - do PoW, maybe per character, per post, or based on posts per unit of time (see the PoW sketch after this list).
  - deletes take LOTS of PoW, and the requirement climbs with the age of the post.
  - every so often, back up a material chunk of all the content they've published, posting it to an indexing node using content-addressable techniques and more PoW.
  - query an indexing node for the pointers to a user's data. Optionally, queries require PoW.
- relays - expensive VPSs, optimized for short-term message distribution. Nobody queries these; people just post and subscribe to anything recent (~6h or ~12h, config'd per relay) and going forward (see the retention sketch after this list). The cost to host this would be much lower than now, and power wouldn't accrue as quickly, because a relay only has to handle throughput, not throughput plus storage plus querying.
- indexing nodes - similar to relays, except they just maintain a map between users and pointers to content hosted elsewhere, using content-addressable techniques, all signed by the user. So 20 posts get bundled together into a block-like file format, and the content-addressable pointer to that block is what the indexing node stores (see the block/pointer sketch after this list).
- long-term storage - This is where history lives in an archived state. Adapters can be written so that users can host just their own content, or pay somebody else to do it for them. But the indexing nodes get queried and maintain the pointers. Want to do a delete? Update the indexing node with an entirely new block that replaces the old one.
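
To make the clients' PoW jobs concrete, here's a minimal sketch in the spirit of NIP-13-style difficulty (counting leading zero bits of a hash). The difficulty values and the `deleteDifficulty` curve are illustrative assumptions, not a spec:

```typescript
import { createHash } from "crypto";

// Count leading zero bits of a hex digest (NIP-13-style difficulty).
function leadingZeroBits(hex: string): number {
  let bits = 0;
  for (const c of hex) {
    const nibble = parseInt(c, 16);
    if (nibble === 0) { bits += 4; continue; }
    bits += Math.clz32(nibble) - 28; // leading zeros within this nibble
    break;
  }
  return bits;
}

// Grind a nonce until the post's hash meets the target difficulty.
function minePost(content: string, difficulty: number): { nonce: number; id: string } {
  for (let nonce = 0; ; nonce++) {
    const id = createHash("sha256").update(`${nonce}:${content}`).digest("hex");
    if (leadingZeroBits(id) >= difficulty) return { nonce, id };
  }
}

// Hypothetical curve: deleting an old post costs more than deleting a fresh one.
function deleteDifficulty(base: number, ageDays: number): number {
  return base + Math.floor(4 * Math.log2(1 + ageDays));
}
```

Each extra bit of difficulty doubles the expected work, which is what lets the cost of a delete climb smoothly with the post's age.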
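
The relays' job then shrinks to roughly this; the 6h window and the `prune` helper are hypothetical, and the window would really be per-relay config:

```typescript
// Hypothetical relay-side retention: serve only recent events.
const RETENTION_MS = 6 * 60 * 60 * 1000; // ~6h window, per-relay config

interface RelayEvent { id: string; createdAt: number; payload: string }

function withinWindow(ev: RelayEvent, now = Date.now()): boolean {
  return now - ev.createdAt <= RETENTION_MS;
}

// Run on a timer: drop anything older than the window, so the relay
// only ever pays for throughput, never long-term storage or querying.
function prune(events: RelayEvent[], now = Date.now()): RelayEvent[] {
  return events.filter((ev) => withinWindow(ev, now));
}
```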
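
And the bundle-hash-point flow for the indexing nodes; the block format and the `IndexEntry` shape are invented for illustration, with signing stubbed out:

```typescript
import { createHash } from "crypto";

interface Post { createdAt: number; content: string }

// Bundle ~20 posts into a block-like blob and derive its content address.
function makeBlock(posts: Post[]): { blob: string; address: string } {
  const blob = JSON.stringify(posts);
  const address = createHash("sha256").update(blob).digest("hex");
  return { blob, address };
}

// What an indexing node stores: user -> signed pointer to a block
// that can be hosted anywhere (the user's box, a paid host, etc.).
interface IndexEntry {
  pubkey: string;
  blockAddress: string; // content address of the block
  signature: string;    // user's signature over the address (stubbed)
}

// A delete is just a replacement: publish a new block without the
// offending post and repoint the index at the new address.
function replaceBlock(entry: IndexEntry, newAddress: string, newSig: string): IndexEntry {
  return { ...entry, blockAddress: newAddress, signature: newSig };
}
```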
This transfers the onus to the client to download blocks of data, parse them, then cache them locally and efficiently (sketched below), rather than taxing relays with sloppy/duplicate/lazy pulls or expecting a relay to have infinite retention (they won't, I'm 100% sure they won't). The clients that do this job efficiently will win, because they won't use nearly as much bandwidth.
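
That client-side loop might look something like this; the cache path and the `fetchBlob` callback are assumptions for the sake of the sketch:

```typescript
import { createHash } from "crypto";
import { promises as fs } from "fs";

// Fetch a block by content address, verify the hash actually matches
// (content addressing means the host doesn't need to be trusted),
// then cache it locally so it's never downloaded twice.
async function fetchAndCache(
  address: string,
  fetchBlob: (addr: string) => Promise<string>
): Promise<string> {
  const cachePath = `./cache/${address}`;
  try {
    return await fs.readFile(cachePath, "utf8"); // cache hit: zero bandwidth
  } catch {
    const blob = await fetchBlob(address);
    const digest = createHash("sha256").update(blob).digest("hex");
    if (digest !== address) throw new Error("block does not match its address");
    await fs.mkdir("./cache", { recursive: true });
    await fs.writeFile(cachePath, blob, "utf8");
    return blob;
  }
}
```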
Required Reading:
- How IPFS is broken - https://fiatjaf.com/d5031e5b.html
- Why IPFS cannot work, again - https://fiatjaf.com/b8e2f959.html
Heyo, you have a very similar idea to mine (which I posted as a reply to OP), although I didn't think about a separate component for storage; that could work well. I'm a little confused as to why more people haven't proposed this architecture for Nostr; the problems with the current implementation are pretty evident.