This could definitely be articulated better on the site; a lot of the technical detail is beyond most of us.
What I grasped is that it aims to overcome some limitations by speeding up query and download times on Nostr, which can be an issue with some clients.
Beyond that, it offers a credible alternative to IPFS and avoids clients and relays each having to download full, duplicate copies of files.
——
“This will unlock a new tier of speed for a variety of decentralized apps that require file directories.”
“Without cryptographically verifiable chunking, users would need to download the entire file to verify it — rather than verifying each chunk immediately after the download. Downloading a very large file before verifying it wastes a lot of time and bandwidth if the file turns out to be incorrect.”
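The quoted passage can be made concrete with a small sketch. This is a generic illustration of chunk-by-chunk verification, not Hornet Storage's actual format; the chunk size and hash choice here are assumptions.

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # illustrative chunk size, not Hornet Storage's real value

def chunk_hashes(data: bytes) -> list[str]:
    """Split a blob into fixed-size chunks and hash each one.
    The sender publishes this list up front."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

def verify_chunk(chunk: bytes, expected_hash: str) -> bool:
    """Check a chunk the moment it arrives, instead of waiting
    for the entire file to finish downloading."""
    return hashlib.sha256(chunk).hexdigest() == expected_hash
```

With per-chunk hashes, a corrupted or malicious chunk is rejected after at most one chunk's worth of wasted bandwidth, rather than after the whole file.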
Hornet Storage operates beneath Nostr, storing relay data as Merkle DAGs/Trees in its key-value database.
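A rough idea of what "Merkle DAGs in a key-value database" means in practice: every node is stored under the hash of its own content, so identical data always lands on the same key. This is a toy sketch under that assumption; Hornet Storage's real schema is not documented here.

```python
import hashlib
import json

class HashStore:
    """Toy content-addressed key-value store: each Merkle node is
    keyed by the SHA-256 of its serialized form."""

    def __init__(self) -> None:
        self.kv: dict[str, bytes] = {}

    def put(self, node: dict) -> str:
        blob = json.dumps(node, sort_keys=True).encode()
        key = hashlib.sha256(blob).hexdigest()
        self.kv[key] = blob  # identical content always maps to one key
        return key

    def get(self, key: str) -> dict:
        return json.loads(self.kv[key])

# Leaves hold data; branches hold the hashes of their children,
# so the root hash commits to the whole tree.
store = HashStore()
leaf_a = store.put({"type": "leaf", "data": "chunk-1"})
leaf_b = store.put({"type": "leaf", "data": "chunk-2"})
root = store.put({"type": "branch", "children": [leaf_a, leaf_b]})
```

Because keys are derived from content, re-inserting the same leaf is a no-op, which is what gives hash-organized storage its deduplication for free.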
The relay manager lets operators view data usage by service and select which services to host.
e.g. data is tagged with a credential indicating its service type, such as “nostr note” for posts, “nostrtube” for a Nostr YouTube video, “stemstr” for a Stemstr music note, and “git” for a git repository folder, among others.
Browse Nostr without HTTP. Connect to relays directly.
Hash-organized storage avoids data duplication, with compressed Merkle branches and batched queries for syncing many missing leaf hashes in a single request.
Warning: the relay is currently in development. It is not yet stable and is not recommended for production use.
This is interesting but I'm struggling a bit on what it actually means or what use cases it enables.
ELI5?