0 sats \ 0 replies \ @orthzar 16 Mar 2023
There are two complementary solutions:
- Put a limit on the size of encrypted events (e.g. 500 KB). To prevent people from side-stepping the size limit, relays would also limit the total number of encrypted events stored per relay. Clients should distribute the load of their data across relays to eliminate central points of failure.
- Automatically delete encrypted events after some time-frame. A NIP would be needed to specify how relays would inform clients of that time-frame.
Relays should not be unlimited free storage for arbitrary data anyway. Both relay and client implementations must get smarter about how relays are used, and relays can force client implementers to get smarter by putting limits in place.
Another good limit would be the total number of connected users. 100 relays with a million users each is a recipe for failure, whereas a million relays with 100 users each is a recipe for resiliency. Of course, users would come and go at will -- no registration needed.
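A connection cap like that is cheap to enforce. A minimal sketch, assuming a per-relay limit of 100 (the class and limit here are hypothetical):

```python
# Hypothetical connection gate for a small relay; the cap is illustrative.
MAX_CONNECTED_USERS = 100

class ConnectionGate:
    def __init__(self, limit: int = MAX_CONNECTED_USERS):
        self.limit = limit
        self.connected: set[str] = set()

    def connect(self, pubkey: str) -> bool:
        """Admit a user if under the cap -- no registration, come and go at will."""
        if pubkey not in self.connected and len(self.connected) >= self.limit:
            return False  # relay full; client should try another relay
        self.connected.add(pubkey)
        return True

    def disconnect(self, pubkey: str) -> None:
        self.connected.discard(pubkey)
```

Clients that get refused simply move on to the next relay, which is what pushes the network toward many small relays instead of a few giant ones.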