Let the hashrate arms race begin.
See, proof of work costs. It burns electricity and ties up hardware for as long as it runs.
Indra's design anticipates the spam and sybil attacks by changing the way that the hidden service is established.
In Tor, basically you make a standard 3 hop circuit to a middleman, which then forwards the packets onto the hidden service's own 3 hop circuit. I don't know exactly how they have changed the protocol since I first studied it back in 2007-8, but they keep 6 rendezvous circuits open, and those nodes advertise that they are serving access to that hidden service address.
Indra takes a different approach. Because it uses source routing, no node along the path knows anything beyond its own hop, and none has any discretion: they either deliver, or they don't.
Indra uses a "routing header", which consists of the set of addresses to forward to, together with the keys of the sessions being spent from at each hop that forwards the message.
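As a rough sketch of that structure (the names here are my own, not Indra's actual types): a routing header is a stack of layers, each naming the next hop and the session being spent from, and each relay only ever sees its own layer:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    next_hop: str       # address of the relay to forward to
    session_key: bytes  # identifies the session being spent from

def build_header(path):
    # path: list of (address, session_key) tuples, first hop to last
    return [Layer(addr, key) for addr, key in path]

def peel(header):
    # a relay reads only the top layer: where to send the packet next,
    # and which session pays for the forwarding; the rest stays opaque
    layer, rest = header[0], header[1:]
    return layer, rest

header = build_header([("relay-a", b"k1"), ("relay-b", b"k2"), ("relay-c", b"k3")])
layer, remaining = peel(header)
```

In a real implementation the remaining layers would be encrypted so a relay cannot read past its own entry; that is omitted here for brevity.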
Hidden services are found on Indra via the p2p gossip peer database, from advertisements sent out when a relay receives a hidden service introduction message. The relay's IP address is public, but instead of acting as a middleman for the whole time the client is connected, it only forwards the user's connection request (containing a return routing header) to the hidden service as a one-shot message, using the service's routing header. The service then uses the supplied client return routing header to send back a "ready" message, prefixed with two hops of its own to prevent unmasking by an adversary controlling the relays in the last (3rd) hop.
Once the client receives the ready message, it can send requests to the server with the return routing header from the ready message attached, plus its own two-hop forwarding layers at the top of the onion, and the server does precisely the same in reverse.
There is sybil and spam to contend with in any anonymising network, and the reality is that Tor's clunky, very old telescoping bidirectional relay connections make the network very vulnerable to spam, since hidden services effectively have a bottleneck in the bandwidth capacity of their rendezvous points. That's why they have had to add this proof of work challenge.
If you have ever dipped into the ocean of hidden services (maybe "pond" is more accurate), you would know that pretty much all of the dark web sites have intense proof-of-human tests: they make you scan an image for fragments of the hidden address, solve really nasty letter recognition captchas, and pass challenge authentication using PGP encryption.
Tor's hidden service network will continue to limp along thanks to this update, but it has to be emphasised that this means users are now effectively paying for their access to hidden services.
But there will be an arms race, and in their proposal: https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/327-pow-over-intro.txt they basically admit it is not a useful defence against the biggest attackers that confound Tor.
The subject of adding money to serve as a spam limiter and a sybil countermeasure was being discussed back in 2006 on the Tor mailing list (you'll find me there if you dig it up).
The reason Tor (and i2p, and others) have this telescoping circuit-opening system is that source routing has a real problem: there is no direct path between the client and the relays to inform the client about congestion, so a client can send out a message that gets stuck because one hop in its path is being flooded. If you randomly select paths, it is inevitable that at some moment everyone picks the same relay in their circuit and nobody's message gets through.
Source routing needs a privacy-protecting payment pathway to provide a means of spam and sybil protection, and to fund the operation of the relays. This is why HORNET never got very far: source routed anonymising networks have a real challenge in managing congestion.
Indra will use frequent reports, carried in response messages, that tell the client how loaded the exit point is, and the p2p gossip network's DHT peer database will also carry frequent updates of peer records, so clients can shift traffic away from heavily loaded relays in favour of ones that are not so busy.
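The selection side of that can be sketched in a few lines (a toy of my own, not Indra's code): weight each candidate relay inversely by its last reported load, then sample, so lightly loaded relays are picked more often without starving anyone completely:

```python
import random

def pick_relay(peers, rng=random):
    # peers: dict of relay id -> last reported load, in [0.0, 1.0)
    # weight inversely by load so lightly loaded relays win more often
    weights = {pid: 1.0 - load for pid, load in peers.items()}
    total = sum(weights.values())
    r = rng.uniform(0, total)
    for pid, w in weights.items():
        r -= w
        if r <= 0:
            return pid
    return pid  # floating point edge case: return the last candidate

# made-up load reports for three relays
peers = {"a": 0.9, "b": 0.1, "c": 0.5}
```

Over many selections, "b" (10% loaded) gets roughly six times the traffic of "a" (90% loaded), which is the self-balancing behaviour the gossiped load reports are meant to produce.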
Tor was always doomed to scale poorly, because of the rendezvous bottleneck and the general problem of managing congestion. The only viable solution to these problems is a privacy-protecting micropayment system, where users pay relays a little ahead of time for traffic.
LN payments are routed the same way as Indra payments. Those invoices are essentially onion packets: each designated peer receives the message addressed to it and only then discovers where to forward it. It can be private if the nodes publish their LN peer identity key and AMP/keysend is used. This is also why payments can fail: the path finding algorithm cannot always choose a working path, because nodes do not publish their channel balances.
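A toy illustration of that failure mode (made-up numbers, nothing from the actual LN spec): the gossip graph publishes channel capacities but not how the balance is split between the two ends, so a path that looks viable from capacities alone can still bounce:

```python
# each channel: (published_capacity, spendable_balance_in_this_direction)
# the sender can see the capacity but NOT the balance split
channels = {
    ("alice", "bob"):   (100, 100),
    ("bob",   "carol"): (100, 5),  # capacity looks fine, but nearly drained
}

def try_payment(path, amount):
    # walk each hop; the payment bounces wherever the hidden balance
    # is smaller than the amount, even though capacity looked sufficient
    for hop in zip(path, path[1:]):
        capacity, balance = channels[hop]
        if amount > balance:
            return False
    return True
```

A 50-sat payment along alice → bob → carol fails at the bob → carol hop despite 100 sats of published capacity, which is exactly the information gap that makes LN path finding probabilistic.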
Anyhow. Haha.
It kinda amused me seeing that they devised a PoW that uses iterative math functions to prevent ASICs being built. I made a simple one myself that strung together a series of multiplications and then divisions and a final hash, theoretically impossible to accelerate because it depends on long division, which proceeds one digit at a time.
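Something in the spirit of what I described (a toy reconstruction, not the original code): chain multiplications and divisions so each step depends on the last one's result, which forces serial evaluation, then hash the final accumulator:

```python
import hashlib

def div_chain_pow(seed: bytes, rounds: int = 1000) -> bytes:
    # derive a large odd starting value from the seed
    x = int.from_bytes(hashlib.sha256(seed).digest(), "big") | 1
    acc = x
    for i in range(rounds):
        acc = (acc * (x + i)) % (2**256)  # multiply, keep within 256 bits
        acc = acc // ((i % 97) + 3)       # division: the serial, digit-at-a-time step
        acc |= 1                          # keep the accumulator nonzero
    return hashlib.sha256(acc.to_bytes(32, "big")).digest()
```

Each round's divisor and multiplicand depend on the running value, so the rounds cannot be parallelised against each other; whether that truly resists dedicated silicon is the claim under debate, not something this sketch proves.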
I'm keeping an eye open for that eventuality; it will be interesting to me because I designed one of these algorithms myself.
reply
The algorithm was proposed by tevador, a co-creator of RandomX (used in Monero), back in 2020: https://lists.torproject.org/pipermail/tor-dev/2020-June/014358.html
RandomX itself is based on the idea of generating random code from the last block hash, so there is no fixed algorithm you could benefit from putting into silicon; the design specs can be found here: https://github.com/tevador/RandomX/blob/master/doc/design.md
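The core idea can be shown with a toy (nothing like RandomX's real VM, which generates actual machine code and exercises the whole CPU): seed a generator with the block hash, emit a random instruction sequence, and execute it. Because the program changes every block, a fixed circuit cannot bake in one algorithm:

```python
import hashlib
import random

# a tiny instruction set over 32-bit values
OPS = {
    "add": lambda a, b: (a + b) & 0xFFFFFFFF,
    "mul": lambda a, b: (a * b) & 0xFFFFFFFF,
    "xor": lambda a, b: a ^ b,
    "rot": lambda a, b: ((a << (b % 32)) | (a >> (32 - b % 32))) & 0xFFFFFFFF,
}

def random_program(block_hash: bytes, length: int = 64):
    # the program is derived deterministically from the block hash,
    # so every miner runs the same code, but it differs each block
    rng = random.Random(block_hash)
    names = list(OPS)
    return [(rng.choice(names), rng.getrandbits(32)) for _ in range(length)]

def execute(program, nonce: int) -> bytes:
    acc = nonce & 0xFFFFFFFF
    for name, operand in program:
        acc = OPS[name](acc, operand)
    return hashlib.sha256(acc.to_bytes(4, "big")).digest()
```

With millions of possible programs, the only "ASIC" that runs them all efficiently is a general-purpose CPU, which is the whole point of the design.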
Here is a great write-up on the development from RandomX to Equi-X as a DoS protection and why certain design decisions have been made: https://github.com/tevador/equix/blob/master/devlog.md
And finally here is the discussion on Git of the potential benefits and drawbacks of it being implemented: https://gitlab.torproject.org/tpo/core/tor/-/issues/40634
reply
I think this is stupid: attackers will have ASICs and users will not, so either the difficulty is so high that normal users can't use it, or it is low enough that it means nothing to attackers.
reply
Please read up on RandomX and the history of Equi-X: https://github.com/tevador/equix/blob/master/devlog.md
Both generate "random" code based on the hash of the last block, so you have millions of possible algorithms which you simply can't put onto silicon; in effect, a modern CPU/GPU is already the best ASIC for the job. RandomX has been out four years now and hasn't been "broken" by any kind of ASIC.
Sure, an old CPU will take 10x or 100x the time to solve, but that is acceptable for DoS mitigation: an attacker needs 1 second while someone with a 15 year old Pentium needs 100 seconds. The captchas used on hidden services usually take longer than that, with several failed attempts along the way, and don't protect the service the way this solution does.
reply
Howdy do? 🀠 πŸ‘‹
reply