Let's start a contrarian discussion on why the nostr architecture is not decentralized and won't scale
Is there even an argument that nostr does scale? I haven't had the time to examine it as closely as I should. But my understanding is there's no mechanism to spread the load on popular relays; even simple things like avoiding DoS attacks aren't clear yet beyond "paid relays", which of course isn't a very decentralized way of moving data around.
At small scale these problems are probably not a big deal as lots of people can step in to run new relays. But at Twitter scale, if big relays went down it'd take quite a lot for someone else to take up the slack and for users to switch over.
reply
Nostr is a hobbyist project right now. It probably scales to about 1 million users with the 200 or so relays run by volunteers.
Need more and bigger relays for it to scale to, say, 10 million. Hopefully companies like Tidal, which have PBs of storage and lots of bandwidth, can help.
Nostr definitely needs help to level up into serious infrastructure
As a proof of concept, it is surviving ... for now
reply
Nostr definitely needs help to level up into serious infrastructure
Which will certainly centralize it.
What nostr needs is a better design that allows large numbers of people to collectively run the nodes that make it work, without coordination. But nothing I've seen indicates that doing that hard work is going to happen.
reply
Nostr crossed 1000 users just over a month ago, now it's heading to 1 million
That is hard given the project is grass roots without funding
I don't think of it as decentralized like bitcoin. I think of it more like RAID with websockets: if one relay goes down you (hopefully) have a few more, so you don't lose your account and you are not disrupted.
It's an experiment built on taproot and schnorr. That unlocks a certain amount of creativity that fills a gap in the bitcoin ecosystem.
Scaling it will be a challenge, but maybe not impossible, we'll have to see where we are in a year
I think bitcoin will benefit from a social layer. And nostr is one candidate to do that. If people want it enough, it has a chance.
Regarding the hard work of scaling, we need that. I personally think your idea around single-use seals could add a great deal to the existing digital signature infrastructure by anchoring commitments to a timechain, or even checkpointing.
Couple that with a reputation and trust system that is getting built out, and you have a number of models for contracts and incentives that could create a rich ecosystem. Time will tell. We are early!
reply
Where can I find more information about those reputation and trust systems? Thanks.
reply
It has some funding from @jack, @fiatjaf, etc. and there are new bounties cropping up often that are incentivizing the most needed feature additions. I think these are a good start towards nostr tackling the scaling problem.
reply
There is no funding for infrastructure
At this point a $5 VPS would make a difference
reply
Which will certainly centralize it.
I don't think that nostr would work well with too many relays. The fact that nostr uses public key cryptography makes it inherently better than alternatives such as Mastodon, where you're screwed if the instance just bans you. With nostr you can take your identity to any of the relays available.
Therefore it just takes one of the relays to accept you. In the worst case where literally every relay has banned you, you can self-host a relay.
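That portability is just public-key cryptography: the same signed event is valid on any relay. A minimal sketch (untested, and assuming nostr-tools' v2-style API):

```typescript
// The identity lives in the key pair, not on any server: any relay or client
// can verify an event without trusting the relay it came from.
// Assumes nostr-tools v2 (generateSecretKey, finalizeEvent, verifyEvent).
import { generateSecretKey, getPublicKey, finalizeEvent, verifyEvent } from "nostr-tools";

const sk = generateSecretKey(); // your identity: keep this, not an account
const pk = getPublicKey(sk);

const event = finalizeEvent(
  { kind: 1, created_at: Math.floor(Date.now() / 1000), tags: [], content: "hello" },
  sk,
);

// If one relay bans you, the exact same event verifies anywhere else:
console.log(verifyEvent(event), pk);
```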
reply
If you go to the main nostr github repo https://github.com/nostr-protocol/nostr it says this:
About a truly censorship-resistant alternative to Twitter that has a chance of working
To achieve censorship resistance, in my opinion, you must be decentralized. And to be an alternative to Twitter, you must be able to serve an at-scale global feed.
reply
For some, censorship resistant simply means no single point of failure - not bitcoin level decentralization. Nostr at the very least doesn't have a SPoF.
Alternative to Twitter does not necessarily imply global scale. It does imply a core set of features and core utility. Regardless, it's kind of irrelevant what Fiatjaf's Vision™️ was originally. It's more interesting to discuss the power and limitations of the current design.
IMO as is, at global scale, nostr will either:
  1. need some kind of hierarchical design where all layers can somehow economically benefit
  2. become fragmented at the relay level on some arbitrary basis - topic, community, cost, format, etc
  3. centralize into an oligopoly
A lot of nostr advocates seem pretty happy with either (2) or (3) being the terminal state for the time being. I don't think that's what they ultimately want though.
reply
I think that under any technical design, nostr is most likely to end up at 3.
It seems that no matter which built-in economic incentives we create for many players to run relays, there will always be a capital-rich player willing to provide everything for free until the competition dies. That's how Amazon and others keep winning.
Given that, what's left is that the core design allows me to have all my data saved, backed up, moved etc. That means I can use whatever big (free) player there is, until it starts misbehaving. Then I can switch.
Might that be enough to make nostr different?
reply
It certainly makes nostr different. The way I've been thinking about it is that nostr isn't so much decentralized as it is less centralized. Less centralized is still a big deal as it changes incentives a lot. Does it change them enough? Will some killer app or experience emerge as result? Will the average internet user pay the switching costs? We'll see.
reply
Scale is a fiction invented by SV to capture and control the market.
It is healthy for nostr if big relays are difficult to run. It helps prevent centralization. By design, no single relay will ever achieve “twitter scale” and that’s a good thing.
reply
It helps prevent centralization.
No it doesn't. A single, very expensive computer can probably run all of Twitter. There are entities out there who will step in to run those very busy nodes... but only a few entities. That is a clear centralization problem.
Your argument would only be correct if the scale was so enormous that no entity could handle all the load by itself. Which is certainly not true even on the individual computer level, let alone with clusters.
reply
The critical detail is that Nostr clients are supposed to be thick and relays thin. That means that problems should be solved client-side as much as possible, including dealing with relay downtime. So, if you want maximum reliability of communications with your friends, you and your friends would use clients that talk via multiple relays (not necessarily all relays at once, perhaps trying one relay at a time until the message gets through). And each relay would be none-the-wiser.
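A rough sketch of that one-relay-at-a-time failover (untested; the 5-second timeout is arbitrary, and the acknowledgement follows NIP-20's "OK" message):

```typescript
// Client-side relay failover, as described above: try relays one at a time
// until one acknowledges the event. Relay URLs are placeholders.
async function publishWithFallback(event: object, relays: string[]): Promise<string> {
  for (const url of relays) {
    try {
      const accepted = await new Promise<boolean>((resolve, reject) => {
        const ws = new WebSocket(url);
        const timer = setTimeout(() => { ws.close(); reject(new Error("timeout")); }, 5000);
        ws.onopen = () => ws.send(JSON.stringify(["EVENT", event]));
        ws.onmessage = (msg) => {
          // NIP-20 acknowledgement: ["OK", <event_id>, <true|false>, <message>]
          const [type, , ok] = JSON.parse(String(msg.data));
          if (type === "OK") { clearTimeout(timer); ws.close(); resolve(ok); }
        };
        ws.onerror = () => { clearTimeout(timer); reject(new Error("connect failed")); };
      });
      if (accepted) return url; // this relay took it; the others are none the wiser
    } catch {
      // relay down, slow, or rejecting: fall through and try the next one
    }
  }
  throw new Error("all relays failed");
}
```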
reply
I think relays will end up talking to each other to relay transactions and share the load.
reply
this is a huge computer science problem to solve if you want to allow data transmission to remain free but also prevent spam between relays. See ActivityPub's downfalls: https://news.ycombinator.com/item?id=21763572
reply
Paid relays are probably a good way to incentivize decentralization, as it incentivizes relay operation - something lacking in Bitcoin nodes.
reply
Importantly bitcoin node operators have pretty strong indirect incentives to run nodes. It’s not as strong as direct incentives, but stronger indirect incentives than say torrent seeders.
reply
Hope we can get a fruitful discussion started from the contrarian viewpoint! Here's the high level concerns and questions:
1. Incentives
2. Security
3. Scale
*The points below assume the NOSTR use case to be an alternative to Twitter, as stated in the "About" section on Github https://github.com/nostr-protocol/nostr
About a truly censorship-resistant alternative to Twitter that has a chance of working
1. Incentives
  1. In the long run what incentive does a relay operator have to host a public relay (to achieve a twitter like global feed)?
  2. How will the relay operator cover the hosting costs? Assume not all users are on a lightning standard. How will a normie use this system?
  3. How can we achieve network effects if most current twitter users are not on lightning?
  4. What happens when your relay is shut down for routing illicit content? Even if it's encrypted en route through the relay, it's still public from the decrypted client (kiddy porn, etc...). Not a great end user experience if you need to play a game of "whack-a-mole" and keep changing relays when they get shut down or DDoS'd.
2. Security
  1. How do you prevent XSS attacks for browser clients? Can we even have secure browser clients?
Warning: Due to my incompetence, anigma has security vulnerabilities that allow remote siphoning of your private key. I haven't fixed them yet. Don't use anigma with a private key you're not okay with leaking.
  2. Do you feel comfortable having raw data coming into your phone/web browser via websockets from random people on the internet without having a server virus-scan it first? Lots of potential zero-day exploits to come in the near future.
  3. If you argue the Alby chrome extension can help: how many users are capable of running a browser extension, and could the extension be DoS'd? If a virus routes through a websocket and DoS's your session, could it prompt you to pay in an endless loop, rendering the browser client unusable (not sure if this is possible or rate-limited, but worth researching)?
3. Scale
  1. How to achieve a global algorithmic feed like twitter?
  2. Will this lead to an emergence of indexing services?
  3. Take the scenario of an indexing service emerging. Assume the service indexes and aggregates data from multiple relays into a better data structure for global feeds, such as a graph. This could be useful to get info about a friend-of-a-friend's interest in a topic, such as "Dogs" (vertex => edges). The indexing services might reap all the benefit from the client apps (ad model, paid algorithmic feed). Would the raw data relaying service get jealous and start censoring the indexer?
  4. What incentive does the relay have to feed data to an indexer, while they are going broke on hosting costs?
  5. If you argue relays will start indexing data then won't it just be a traditional client/server/database model?
The unhappy path
If you fast forward a few months/years could this be the reality we end up with:
  • The relays will end up being cartels and blacklist users/indexers.
  • The relay runners will run out of money because it costs a lot to host and prevent spam
  • Once kiddy porn starts flowing through, nobody will want to run relays and governments will start shutting down relays like a game of whack-a-mole
  • Clients might need to connect to 100 relays to get relevant data. This might render a client app slow and buggy. Would this drain the battery?
Alternative Case studies
  • XMPP
  • Tim Berners Lee Solid Inrupt Project
  • The original Blockstack 1.0 stuff, *2017 era pre shitcoin stuff
  • All the stuff csuwildcat is doing
reply
I think Nostr is just this generation doing what the previous generation did, without any awareness of the history that followed.
Which p2p systems with their inception in the 2000-2010 period survived:
  • Bittorrent
  • Bitcoin
  • Skype (which was a precursor of Signal)
Gnutella is pretty much gone. Limewire, gone. Tor and I2P are creaking under the constant onslaught of spooks DoSing them to unmask hidden services or user IP addresses, or just shut down onion sites. Tox has almost no users. There's a whole bunch whose names I can't even remember (Mumble?), but nobody has come even close to touching the total connectivity that the big tech social media have got.
In what material way is Nostr any different to IPFS? A few pieces of metadata that could have been built on top of IPNS. Where's IPFS now? Its only use case seems to be hosting retarded image files that have their hashes stamped on some shitcoin chain or other.
My first instinct was "who will run relays", and then after watching the spamfest in #Nostr "how will this network not be overwhelmed by AI powered botspam?"
  • Bittorrent survives because it evaded the DMCA, and the piracy community doesn't need incentives because it is incentivised by the FU factor.
  • Skype/Signal/Whatsapp survive because they are supported by big tech companies who run highly available network peers that keep users connected.
  • Bitcoin is the only network that needs no external incentives.
Scaling up a social network system requires the support of phat servers like the instant messenger p2p protocols have that can soak the DDoS.
Funding the running of such high-capacity servers is a tricky distributed-systems protocol-engineering task. We know we can't expect fiat bros to fund it like the chat systems, since we privacy and decentralisation fans have been sufficient in numbers to demand E2EE.
The initial paid relay services that have been proposed are still decentralised, but trusting them seems like a bad idea, and if this gets popular enough, there are gonna be a lot of people zapping sats and getting no actual caching/relaying.
I might be biased, but I think that Indra's network-internal decentralised service access charging is the only solution that will work long term, and it will support all of the other protocols that are suffering due to lack of capacity.
Indra relay operators will have the option to run any number of decentralised p2p services attached to their relay's service ports, set a fee for access, and use those fees to scale up the capacity of their nodes, farm them out into failovers and relays, and deploy more Indra relays with popular p2p services attached. No need to ask a relay operator to store lots of data; they will just set their fees at a rate that allows them to keep upgrading their systems to cope with growth in traffic.
reply
I will take a stab at describing how to achieve a global algorithmic feed.
First, let's acknowledge how weird it is that we cannot choose the algorithms that curate our feeds. Imagine that your phone came with a set of pre-installed apps made by the manufacturer, and you couldn't remove them or install any others. That is how phones used to work before Apple created the App Store.
Mobile apps are a good analog to algorithmic feeds. The fact that people do not pay for curation algos today does not mean that they won't in the future. My money is on algo stores becoming an important piece of Nostr's ecosystem.
The marketplace could work in the following way:
  1. Algo designer submits his algorithm to the market & sets the price.
  2. The user chooses an algorithm and pays for it with lightning.
  3. The marketplace keeps track of user satisfaction from each algo, similar to the App Store's rating system.
  4. Algo designer shares part of the revenue with relay operators to ensure access to data.
Many users will likely prefer to watch ads instead of paying, and ad networks will also emerge over time. I suspect there will be something like Google AdSense, catering to algorithm designers, as AdSense caters to website owners. This way, individual algorithm designers would not need to run their ad networks to monetize with ads.
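To be clear, nothing like this exists yet. Purely as a hypothetical sketch, a listing and an algorithm might look like this (every name below is made up, not an existing spec):

```typescript
// Hypothetical marketplace listing for a feed algorithm; all field names are
// assumptions for illustration, mapping to steps 1-4 above.
interface AlgoListing {
  id: string;           // content hash of the algorithm bundle (step 1)
  author: string;       // designer's nostr pubkey
  priceMsat: number;    // price paid over lightning (step 2)
  avgRating: number;    // marketplace-tracked satisfaction (step 3)
  relayRevShare: number; // fraction forwarded to relays for data access (step 4)
}

// A feed algorithm is then just: events in, ranked event ids out.
type FeedAlgo = (events: { id: string; created_at: number; content: string }[]) => string[];

// The most trivial algorithm a designer could list: reverse-chronological.
const newestFirst: FeedAlgo = (events) =>
  [...events].sort((a, b) => b.created_at - a.created_at).map((e) => e.id);
```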
reply
I put my contrarian critique here and surprisingly it didn't get interacted with. So I'll leave it here too #131285
reply
I personally think there is lots of work left to do... But it has had a good start and there is room to improve...
reply
As with anything, apathy is always an issue. Most people don't care about digital sovereignty, especially this current generation of the internet, who got hooked on free stuff and don't mind being exploited for that "free" stuff
reply
I think it's the wrong perspective to think nostr has a need to scale. Nostr is tailoring itself to a specific sub-group of people dissatisfied with the current state of twitter (and other platforms). It's not trying to compete for 'market share'. Thinking of nostr in that way is like thinking bitcoin is trying to compete on the NY Stock Exchange. It's not applicable.
reply
Seems like we are back to endless debates from 2016-2017 about "Bitcoin does not scale"...
reply
right! Why does it matter if nostr does not scale? Maybe it’s good to prevent large relays from centralizing the network.
reply
need to make running a relay as easy as running a bitcoin node, tutorials-and-stuff-wise
reply
Sure, that'd be cool... But how is the load supposed to be spread across multiple relays? Because at Twitter scale, one single computer won't be sufficient.
Things like Freenet use sharding to solve this problem. Maybe I'm wrong. But I haven't seen any discussion of how nostr could do this.
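Nothing in nostr specifies this today. Purely as illustration, a Freenet-style scheme might assign authors to relay shards by pubkey, so no single relay carries the whole network (the shard list and the scheme itself are made up):

```typescript
// Illustrative only: shard events across relays by the author's pubkey.
// Not part of any nostr spec; shard URLs are placeholders.
function shardForPubkey(pubkey: string, shards: string[]): string {
  // Pubkeys are 32-byte hex and effectively uniform random, so a prefix is a
  // cheap way to spread authors evenly across shards.
  const bucket = parseInt(pubkey.slice(0, 8), 16) % shards.length;
  return shards[bucket];
}

const shards = ["wss://shard0.example", "wss://shard1.example", "wss://shard2.example"];
console.log(shardForPubkey("8e".repeat(32), shards)); // every client agrees on the shard
```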
reply
You could run an indexer relay that pulls from many data relays to fulfill a query.
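Roughly like this (a sketch using NIP-01's REQ/EVENT/EOSE messages; the subscription id and relay handling are illustrative):

```typescript
// Fan a REQ out to several data relays, merge the streams, and dedupe by
// event id, since the same event can live on many relays.
async function fanOutQuery(filter: object, relays: string[]): Promise<Map<string, any>> {
  const seen = new Map<string, any>();
  await Promise.all(relays.map((url) => new Promise<void>((resolve) => {
    const ws = new WebSocket(url);
    ws.onopen = () => ws.send(JSON.stringify(["REQ", "sub1", filter]));
    ws.onmessage = (msg) => {
      const [type, , payload] = JSON.parse(String(msg.data));
      if (type === "EVENT" && !seen.has(payload.id)) seen.set(payload.id, payload);
      if (type === "EOSE") { ws.close(); resolve(); } // end of stored events
    };
    ws.onerror = () => resolve(); // skip dead relays rather than failing the query
  })));
  return seen;
}
```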
reply
Umbrel are cooking something up about Nostr. I am assuming it will be a plug and play application on their OS.
reply
"Exactly" what was missing to a platform like Umbrel to make even worse the LN nodes network: a bunch of noobs running LN nodes together with nostr relays, on a shity RPi.
What could go wrong, right? Oh yeah sure that, little shity RPi will be so strong that can handle all the traffic.
reply
I respectfully disagree with your statement, because we cannot assume that someone using Umbrel is running it on a Raspberry Pi!
With the surge in price of Raspberry Pis and their limitations, I am looking into getting a mini computer and running several containers, with Umbrel running in each. You can easily have your BTC and LN node in one container, a personal Nostr relay in a separate container, and your Home Automation in a third one.
Umbrel just makes it easier to configure your containers.
reply
That's OK. But most of the clueless users don't know how to do that. You are the exception.
reply
What could Umbrel possibly add that isn’t already available from existing clients or relays?
Related question: has Umbrel ever developed anything original?
reply
which other OS lets you create your own relay with the ease of installing an application?
reply
Last time I heard about them on the topic, they were against the idea of having nostr and lightning hosted on the same server... I'm curious to know what will come out of this.
reply
Better but not ideal.
reply
I have read nostr.how and have a question. The site says that relays do not sync data with each other - so how does content from popular accounts get spread over the network? Do I need to connect to the same relays which are used by the accounts I follow, or how does that work in the end?
reply
Yes, that's exactly how it works for the most part. If I want to see your posts, I need to connect to at least one of the relays you published them to (assuming they haven't deleted your posts).
reply
Thanks for your explanation! So if you have a super popular user posting through your relay, millions of people will connect to your relay, and if you can't handle the load, the user experience will be bad.
Could become a problem for censorship resistance when somebody like Robert Malone gets censored on one relay and needs to find a new one that will host him and can also handle the load.
P2P syncing with users who follow the same account would be cool?
reply
Define decentralization and define scale for us. Both are loaded words and I have a feeling you have specific definitions in mind.
reply
Nostr can easily develop the same problems email has
Email used to be an open protocol where anyone could run a relay and anyone could host a server
Emails were forwarded around the network on a voluntary, optimistic basis, and it worked well
But email didn't have a good way to prevent spam except by blacklisting IP addresses, which turned into a cat-and-mouse game. Eventually, instead of massive blacklists, companies realized email worked fine on a whitelist basis where they only send or receive emails from certain big providers
The exact same fate hangs over nostr unless better incentives are built around it, e.g. paid relays, and that will probably only work if users are okay with a "pay to post" model
In four years nostr will either be dead or a paid service
reply
I think this goal:
a truly censorship-resistant alternative to Twitter that has a chance of working
...should be second to:
notes and other stuff transmitted by relays
And then, the twitter use-case gets built on top of that.
If you do that, then what you can do is split all content into two techniques for sharing:
  • recent content
  • archived content
And split the network into the following architecture:
  • clients - very similar to now, except they have a few new optional jobs baked into the standard
    • do PoW, maybe per character, per post or based on posts per unit-of-time.
    • deletes take LOTS of PoW, and climbs with age of the post.
    • every so often they back up a material chunk of all the content that they've published, posting it to an indexing node using content-addressable techniques and more PoW.
    • query an indexing node for the pointers to data about the user. Optionally, queries require PoW.
  • relays - expensive VPSs, optimized for short-term message distribution. Nobody queries this; people just post and subscribe to anything recent (~6h or ~12h, config'd per relay) and going forward. The cost to host this would be much better than now, and power wouldn't accrue as quickly, because it just has to handle throughput, not throughput plus storage plus querying.
  • indexing nodes - similar to relays, except they're just a map between users and pointers to content hosted somewhere, using content-addressable techniques, all signed by the user. So 20 posts get bundled together into a block-like file format along with the content-addressable information.
  • long-term storage - This is where history lives in an archived state. Adapters can be written, so that users can host just their own content, or pay somebody else to do it for them. But the indexing nodes get queried, and maintain the pointers. Want to do a delete? Update the indexing node with an entirely new block that replaces the old one.
This transfers the onus to the client to download blocks of data, parse it, then cache it locally and efficiently, rather than taxing relays with sloppy/duplicate/lazy pulls or expecting a relay to have infinite retention (they won't, I'm 100% sure they won't). The clients that do this job efficiently will win, because they won't take nearly as much bandwidth.
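A rough sketch of the block/pointer piece (all names here are made up for illustration):

```typescript
// Bundle ~20 signed posts into one content-addressed block, and have the
// indexing node store only a signed pointer (user -> block hash).
import { createHash } from "node:crypto";

interface SignedPost { id: string; pubkey: string; content: string; sig: string }

interface IndexPointer {
  pubkey: string;    // whose history this is
  blockHash: string; // content address of the bundle in long-term storage
  sig: string;       // user's signature over blockHash (signing elided here)
}

function bundlePosts(posts: SignedPost[]): { blob: string; hash: string } {
  const blob = JSON.stringify(posts);
  const hash = createHash("sha256").update(blob).digest("hex");
  return { blob, hash }; // blob goes to long-term storage, hash to the indexer
}

// A delete is then "publish a new block without the post, repoint the index":
// the old block simply stops being referenced, matching the PoW-guarded
// delete semantics described above.
```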
Required Reading:
reply
Heyo, you have a very similar idea to mine (which I posted as a reply to OP), although I didn't think about a separate component for storage; that could work well. I'm a little confused as to why more people haven't proposed this architecture for Nostr, since the problems with the current implementation are pretty evident
reply
I've been using Nostr-based clients for a couple of days and it's very interesting to see it working, there's no denying that. I do worry about its capability to scale in a decentralized manner as well, though.
The biggest thing that I don't understand is why relays don't talk to one another. I feel this should make everything a lot faster and, though it requires more bandwidth and processing power, it could work amazingly if a lot of people ran relays all around the world (basically like LN nodes).
Data is handled in a weird way, because you still don't own your data if you don't run your own relay and publish to it (and if you do, you have to convince people to connect to it for them to be able to see your posts). So a relay could single-handedly wipe all of your data in an instant and it'd be forever lost, right? That's what I'm getting right now.
Nostr is a very interesting idea and the fact that a lot of people are investing in it is a great thing IMO, but I wholeheartedly believe that if data isn't dynamic, being passed from one relay to another and never stored in a single location, it won't work on a global scale.
Here's my pitch: I think people have been thinking in two components for a while now, and that may not always be the best solution. In the case of social media, the end user running the client wants to see everything they query very quickly; they want to be able to see posts from anyone around the world if they search for it (regardless of what relays they're connected to); and they want data security, so that their data isn't always stored in a single location (a little bit like torrents). So we need:
  • Clients
  • Relays
  • Indexers
Indexers being nodes that query known relays and keep track of what public keys are available on what relays. Clients query Indexers to know where to look for stuff and then query that Relay to actually get the stuff. If relays talk to one another, they could keep connections alive with other relays like graphs, and if a Client asks for data from a relay it doesn't have an immediate connection to but shares other relays in common with, it doesn't need to open more connections - opening many seems to be terrible for slowness in the current protocol (I saw a guy saying that they tried connecting to 40 relays and things got veeeeeery slow).
I'm probably not technically proficient enough to really look into this and how it'd work, but I do believe there are better solutions to the problem than what Nostr currently offers
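Still, here's a minimal sketch of the Indexer component I'm describing (entirely hypothetical, not an existing nostr piece):

```typescript
// An indexer crawls known relays and records which pubkeys publish where, so
// clients ask the indexer first instead of opening dozens of websockets.
class RelayIndexer {
  private where = new Map<string, Set<string>>(); // pubkey -> relay URLs

  // Called as the indexer observes events while crawling relays.
  record(pubkey: string, relayUrl: string): void {
    if (!this.where.has(pubkey)) this.where.set(pubkey, new Set());
    this.where.get(pubkey)!.add(relayUrl);
  }

  // A client query: "where do I find this author's posts?"
  lookup(pubkey: string): string[] {
    return [...(this.where.get(pubkey) ?? [])];
  }
}
```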
reply
so far nobody has been able to successfully answer my original question of how nostr achieves:
a truly censorship-resistant alternative to Twitter that has a chance of working
so in conclusion... NOSTR SUCKS lol.
reply
Check strfry; I believe it has a mechanism to sync relays.
If optimized, a relay should be able to handle Twitter levels of traffic by itself.
The fact is that hardware improves exponentially, faster than humanity grows, so we are probably at the point where single-node Twitter is feasible. That's a first, and it may change things.
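Rough back-of-envelope numbers (the ~500M posts/day and ~1 KB/event figures are assumptions, not measurements):

```typescript
// Back-of-envelope check on the single-node claim.
const postsPerDay = 500e6;  // Twitter's oft-cited ~500M posts/day
const bytesPerEvent = 1_000; // a signed nostr text note is roughly 0.5-1 KB
const writesPerSec = postsPerDay / 86_400;               // ≈ 5,800 events/sec
const ingestMBps = (writesPerSec * bytesPerEvent) / 1e6; // ≈ 5.8 MB/s sustained
console.log(writesPerSec.toFixed(0), ingestMBps.toFixed(1));
// ~6k inserts/sec at ~6 MB/s is within reach of one beefy box; the hard part
// is read fan-out and querying, not ingest.
```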
reply
it just needs to be better than twitter
reply
In the long run what incentive does a relay operator have to host a public relay (to achieve a twitter like global feed)?
  • Pay for relaying traffic. This is an obvious use-case. Having a dumb relay for hole-punching between NATs is very useful. There is already a market developing for this.
  • Pay for storing data. This is another obvious use-case. Relays provide access to websocket native data storage with non-custodial identity. That is actually quite incredible.
  • Pay for pubkey -> name resolution services. This is already quickly becoming a thing. It's very useful for verification and routing.
  • Subscription to a curated platform. A combination of the above, where you are provided a social media experience as a service. Essentially Twitter / Mastodon / etc with a subscription.
  • Pay for API / Integrations. Also quickly becoming a thing. It's simple to set up a chatgpt bot on nostr and charge money.
  • Mining / selling data. We are all aware of this revenue model. The interoperability between clients-relays will make aggregation much simpler.
How will the relay operator cover the hosting costs? Assume not all users are on a lightning standard. How will a normie use this system?
You don't have to use lightning.
How can we achieve network effects if most current twitter users are not on lightning?
Nostr uses schnorr keys for identity, meaning they also support lightning directly (with taproot channels), i.e. your nostr private key can also sign lightning invoices.
Lightning invoices already have a slick integration into nostr (see zaps) and nostr profiles include lud16 payment addresses.
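For example, a profile is just a kind-0 event whose content carries the lightning address (values below are placeholders):

```typescript
// A nostr profile is a kind-0 metadata event; the lud16 field inside its
// content is a lightning address any client can use to offer zaps (NIP-57).
const profileEvent = {
  kind: 0,
  created_at: Math.floor(Date.now() / 1000),
  tags: [],
  content: JSON.stringify({
    name: "alice",
    about: "example profile",
    lud16: "alice@example-wallet.com", // lightning address used for zaps
  }),
  // pubkey, id and sig are filled in when the event is signed
};
```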
What happens when your relay is shut down for routing illicit content? Even if it's encrypted en route through the relay, it's still public from the decrypted client (kiddy porn, etc...). Not a great end user experience if you need to play a game of "whack-a-mole" and keep changing relays when they get shut down or DDoS'd.
This is a common red herring that you can bring up for any platform. Relays need better moderation tools, but they are actively being developed.
How do you prevent XSS attacks for browser clients? Can we even have secure browser clients?
Same as any other key custody solution: hardware and software signing devices. See nos2x.
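A sketch of that pattern, per NIP-07 (the event template here is illustrative):

```typescript
// The page hands an unsigned event to window.nostr (injected by nos2x, Alby,
// etc.) and gets a signed one back, never touching the private key.
async function postViaExtension(content: string) {
  const signer = (window as any).nostr; // NIP-07 interface
  if (!signer) throw new Error("no NIP-07 signer installed");
  // The extension prompts the user and signs inside its own context, so even
  // a successful XSS can only request signatures, never read the key.
  return signer.signEvent({
    kind: 1,
    created_at: Math.floor(Date.now() / 1000),
    tags: [],
    content,
  });
}
```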
Do you feel comfortable having raw data coming into your phone/web browser via websockets from random people on the internet without having a server virus-scan it first?
This is another common red herring framed as a nostr-centric problem. Write good code and sanitize your inputs.
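For example, rendering event content as text rather than markup neutralizes script injection by construction (a minimal sketch):

```typescript
// Treat every relay event as hostile text, never as markup: textContent
// (not innerHTML) makes the browser escape it, so <script> stays inert text.
function renderNote(container: HTMLElement, event: { content: string }): void {
  const p = document.createElement("p");
  p.textContent = event.content;
  container.appendChild(p);
}
```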
How to achieve a global algorithmic feed like twitter? Will this lead to an emergence of indexing services?
Most likely.
If you argue relays will start indexing data then won't it just be a traditional client/server/database model?
With decentralized identity and client-native data replication, yes.
reply
Time will tell; let it be... either it stays, like bitcoin, or it fades into the past, like the $...
reply