I have been busy all this afternoon doing a massive revision of the Indranet White Paper, which can be found here:

https://github.com/indra-labs/docs/blob/main/whitepaper.md
https://github.com/indra-labs/docs/blob/main/whitepaper.pdf

The last few weeks have brought a massive number of clarifications, refinements and deletions of ideas, inspired partly by the implementation and partly by whatever it is that causes new ideas to pop into my head.

Highlights

  • Hidden services! Yes, and the best part is that implementing them in Indra's design is a lot simpler than I initially thought, due to its source routing. And even better, because of the way Indra constructs routing headers, the paths are only 3 hops long instead of passing through 6 hops like Tor, and the client constructs the return path.

  • Exit tunnelling - I initially thought it would be better for Indra to focus only on letting relay operators offer client-side anonymity for messages sent out on distributed networks (Bitcoin, LN, BitTorrent, IPFS, etc.), and this will still be easy to do. Adding clearnet exit tunnelling is just a matter of placing a SOCKS5 proxy on a local port that forwards requests out. Relay operators will be able to allow blanket relaying, or restrict relaying to a limited set of protocol types (ports).

  • Clearer ideas about how applications will interface with and send traffic out over Indra in general.

  • More details about interoperability with bidirectional socket protocols like WebSockets and push messages.

  • Congestion mitigation - there is now a specification for the several measures that allow the network to dynamically adjust its use of relays, so that relays can enforce their bandwidth limits without identifying users.

  • Failure recovery - more details about how nodes will proactively avoid unpleasant delays in traffic, and how to deal with intermittent uptime, such as on PCs and mobile devices, as a means of improving the anonymity of a client's traffic.

  • Sybil attack countermeasures - some who are following more closely may know something about this, but the TL;DR is the use of a kind of "fidelity bond" time-locked spend, a TXID that proves the age of a relay, and various ways to combine these data points to limit the ways in which griefers and general villains can abuse the protocol to make money without delivering service.
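
The port-restricted relaying mentioned in the exit tunnelling bullet above could be sketched like this; the type and field names are hypothetical illustrations, not anything from the Indra codebase.

```go
package main

import "fmt"

// ExitPolicy sketches how a relay operator might restrict clearnet exit
// tunnelling to a limited set of protocol types (ports). All names here
// are illustrative assumptions, not taken from the Indra codebase.
type ExitPolicy struct {
	AllowAll bool            // blanket relaying
	Ports    map[uint16]bool // otherwise, only these destination ports
}

// Allows reports whether traffic to the given destination port may exit.
func (p ExitPolicy) Allows(port uint16) bool {
	if p.AllowAll {
		return true
	}
	return p.Ports[port]
}

func main() {
	// A relay that only tunnels Bitcoin p2p (8333) and HTTPS (443).
	policy := ExitPolicy{Ports: map[uint16]bool{8333: true, 443: true}}
	fmt.Println(policy.Allows(8333)) // true
	fmt.Println(policy.Allows(25))   // false: SMTP not permitted
}
```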

Recent Development Activity

Aside from lots of inspiration of late, the implementation is moving forward nicely: all the handy little helpers have been built to simplify constructing and delivering onions over a simulated network composed of goroutines and channels serving in place of network connections.
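
That simulated network might look something like this minimal sketch, with channels standing in for connections and a one-byte prefix standing in for each hop's encrypted layer; this is purely illustrative, not the actual Indra test harness.

```go
package main

import "fmt"

// link stands in for a network connection between two relays.
type link chan []byte

// relay receives one packet, peels its "layer" (here just one prefix
// byte standing in for a decrypted header), and forwards the rest.
func relay(in, out link) {
	pkt := <-in
	out <- pkt[1:]
}

// sendThroughChain wires up `hops` relay goroutines with channels and
// pushes one onion through, returning whatever emerges at the far end.
func sendThroughChain(onion []byte, hops int) []byte {
	links := make([]link, hops+1)
	for i := range links {
		links[i] = make(link, 1)
	}
	for i := 0; i < hops; i++ {
		go relay(links[i], links[i+1])
	}
	links[0] <- onion
	return <-links[hops]
}

func main() {
	// 'a', 'b', 'c' are the three layers; "payload" is the message.
	fmt.Printf("%s\n", sendThroughChain([]byte("abcpayload"), 3)) // payload
}
```

The appeal of this setup is that onion construction and delivery can be exercised deterministically, without sockets or timing flakiness.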

The immediate next piece of work will relate to path failure diagnostics, and then most likely move towards building out the probabilistic path selection algorithm, meaning the first scratchings related to the HODL Consensus in the path selection code.

The Super Shadowy Sponsor has been very busy too, just as manic as I have been in the last few weeks. We now have the basis of a full multi-platform binary and Docker release system, as well as the initial scratchings of the actual application server executable.


Big things are coming. They can only smell it

Great job! I think Indranet is very exciting stuff.

How long will development take?

We are working on it as fast as we two amateur devs can manage. Some more money might help things along a little bit, though too much might actually be counterproductive.

A more seasoned protocol/server dev than me could probably help me advance faster by eliminating the newbie architecture mistakes that consume my time and end up being scrapped a little later on. That would be money well spent if we could get that kind of support, helping fan out the work a little and improve the architecture, and it would be equally valuable even if it were voluntary/self-funded.

Developing new systems is a little bit like walking around in a pitch-dark place you don't know. It's very easy to get overexcited, venture out in a direction and wind up slamming into a brick wall, and then make it even worse because you deleted a bunch of work you thought wasn't important, and then either can't find it or find it was written in a way that doesn't let you easily correct the error.

I still think I'm on target to have some form of working testnet by the end of next month, but I honestly don't know how big the pieces are that are required to get to that point. Fingers crossed, not too big.

17 sats \ 5 replies \ @ama 24 Jan

I think you should consider using a version control system to maintain and control your code and organize its development. It'll make sharing the code with other devs who might help, and/or publishing it later on, much easier as well. BitKeeper is an excellent choice for that.

I'm not that terrible with git, though. Deleted code now usually ends up in the "shelf" in my IntelliJ VCS interface, at least until I next wipe my filesystem and start again. But haha, no, I've managed to keep all my data together now for several years running.

17 sats \ 3 replies \ @ama 24 Jan

Ah, sorry for being an arrogant prick, then. Since you said

[...] you deleted a bunch of work you thought wasn't important, and then either can't find it or it was written so that you can't easily correct the error.

I assumed you weren't using (or were even familiar) with version control systems. I apologize for that.

Haha! No offense taken!

Yeah, some days I am at something for hours, and I get to the end of a step and find I've walked in the wrong direction entirely. I still have many old shelvings in my files now, but usually I just rewrite it a lot simpler and faster than the wrong version that I shelved. That backup soothes my nerves a bit, anyway.

17 sats \ 1 replies \ @ama 24 Jan

I see, but my point was that all that "wrong" code is also worth preserving in your VCS repository, in my opinion, because it might become relevant later. 😂

Ah yeah, well, that just happens when more than a day passes between changes. I generally make about 3-5 commits a day. I was talking about those irritating times when I try to make a bunch of changes and it gets too confusing for me to debug, so I start again. Usually in the process I learn something about what I'm trying to do, and the second try is really smooth. Those errors tend to get lost, but now they end up on my disk in the "shelf" storage in GoLand.

Ahh, I know the feeling all too well! Wish my development skills were strong enough to help.

deleted by author

That's a clearnet "Bob" on the right hand side of that diagram, not a hidden service.

You can clearly see that there are two hops to the rendezvous point and another three hops from there to the hidden service.

From here:

https://community.torproject.org/onion-services/overview/

In Indranet, the randomly chosen intermediary does not route traffic; it only hands out regularly updated routing header packets to the nodes requesting them. These requests run over a standard 5-hop circuit (out and back, with two hops in between), hiding the location of the client, which can then place this packet at the front of its messages to the hidden service.

The header packet provided by these nodes, which will be called "Introducers", contains the forward path: the first, second and destination hops, wrapped in encryption that only the first hop can decode. Note that this outbound path is also paid for by the hidden service, but the client adds an extra 3 hops, which provides the hidden service with a return path for the messages.
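
To make the layering concrete, here is a toy sketch of a routing header wrapped so that each hop can only peel its own layer. XOR with a per-hop key stands in for real encryption (it has no security whatsoever), and the header shape and all names are assumptions for illustration, not Indra's actual format.

```go
package main

import "fmt"

// xor combines a buffer with a repeating key; applying it twice with the
// same key restores the original, which is all this toy needs.
func xor(b, key []byte) []byte {
	out := make([]byte, len(b))
	for i := range b {
		out[i] = b[i] ^ key[i%len(key)]
	}
	return out
}

// wrap layers the payload with one "encryption" per hop, innermost-first,
// so that the first hop's layer ends up outermost.
func wrap(payload []byte, hopKeys [][]byte) []byte {
	out := payload
	for i := len(hopKeys) - 1; i >= 0; i-- {
		out = xor(out, hopKeys[i])
	}
	return out
}

// unwrap peels exactly one hop's layer, as each relay would.
func unwrap(onion, key []byte) []byte {
	return xor(onion, key)
}

func main() {
	// Three hops: first, second, and the destination.
	keys := [][]byte{[]byte("hop1"), []byte("hop2"), []byte("dest")}
	onion := wrap([]byte("hello hidden service"), keys)
	// Each relay in turn peels the layer only it can decode.
	for _, k := range keys {
		onion = unwrap(onion, k)
	}
	fmt.Printf("%s\n", onion) // hello hidden service
}
```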

It's good to have people asking meaningful questions, since this explanation made me realise that as it stands, the client alone can construct a return path that could unmask the hidden service's real IP address.

The first thought that pops into my head is that this means the exit must prefix its return path BEFORE the provided return path.

So actually, Indra winds up with 9 hops in the hidden service circuit if that is how it will be implemented.

It may work out better, since it's 6 hops versus 9, to go with the same basic model as Tor. That was how I originally envisioned it, but this new idea has a big hole that I didn't think of. It might require not 3 but only two hops in the server's first part of the return header.

Those seem to be the two options with source-routed connections: 8 hops, or 6 intermediated by a rendezvous point.

The difference between them may not be so great as to make any real effective difference.

Oh, of course, 6 sounds better than 8, but on the other hand, the rendezvous point's load level may cause congestion for these packets, as it is also busy relaying general traffic.

It seems to me like the alternatives need a suite of tests to determine what is better. And there may be another idea yet that gets around this problem, still hiding the last two hops of the return path but allowing a shorter overall path. Nothing occurs to me right now thinking about it.

The main advantage of the 8 hop path would simply be that the traffic would not depend on the responsiveness of rendezvous points, since the relays would be probabilistically selected from the nodes the peer knows are not too busy.

Thanks for the detailed replies! I deleted my comment asking about 3 vs 6 hops because I remembered that hidden service routes do have more than 3 hops in between (while routes to clearnet destinations are 3 hops). Glad it ended up being a helpful question anyways :)

Sorry, that was a bit of a stream of consciousness. After explaining the problem to the Super Shadowy Sponsor guy, I decided that even so, the notion of introducers serving up new forward paths is still better overall.

The reason being that at any one time, all of the rendezvous points chosen by the hidden service could be very busy, not only with the hidden service traffic but also the general network relaying traffic.

And the summed bandwidth capacity of the chosen rendezvous points may not be nearly as high as what the exit node can handle. Relays can advertise claimed capacities to the network, but these can be lies, whereas averaging the load over the whole network puts the inbound capacity at the average of the network as a whole, which by definition is likely higher than the summed capacity of a worst-case set of rendezvous points at a given time. Latency is a major enemy in my design thinking. I want to create a protocol that can become as ubiquitous as TCP/IP itself, which is something of an unfair competition, but it helps me evaluate design options under one razor that rules out bad options quickly due to their latency cost.

The network would not be able to adapt to this quickly, as introducer points can only receive updates at the speed of the p2p DHT network's propagation rate. So until the hidden service changes its rendezvous points, traffic to it could be limited by an unfortunate choice of rendezvous points.

In contrast, the 8 hops (3 forward provided by the hidden service, the first two return hops also chosen by the hidden service, and 3 more, including the last layer, for returning to the client) can be chosen by both sides from among less busy relays, based both on the p2p DHT network's propagation of congestion information and on the data gathered directly, via onion circuit paths, about the utilisation level of each relay. In fact, this information would be a logical component of the chatter between relays, reporting their utilisation levels to each other.

The gathering of such data on an anonymity network is full of gotchas related to leaking contemporary data about the exits, especially, that a client is connecting to. As such, only exit relays used by a client report their current utilisation value, but because the client is constantly choosing new exit points among the many relays providing a given service, this traffic can keep the client's picture of the relays' recent congestion state up to date and feed into its probabilistic selection of nodes for any given message.

So, overall, I'm leaning very heavily towards the 2-hop first layer provided by the hidden service, plus the 3 hops provided in the outbound message for delivering the return, because it fits better with the congestion mitigation strategies that provide a stronger guarantee of low latency for users.

I originally wasn't going to focus much on this facility in the protocol, but when the "oh, but they could just deliver short-lived outbound paths" thought occurred to me, I got all excited, because at first it seemed like a better option than static, slowly changing rendezvous points forwarding traffic between the endpoints. But I think that in terms of reliability and latency minimisation, the 8 hop idea works out better.

A little update: the hidden service protocol that I first came up with yesterday opens up a vulnerability where an evil client or evil hidden service could control both of the intermediary hops in the routing header. So now each side adds a two-hop routing header of its own, which puts two nodes between it and the first provided routing header.

The introducer acts as a go-between, receiving a routing header request and then sending a request back to the hidden service, which returns a new header along with a return path for delivering the next request.

Also, introducer nodes charge a high fee (maybe as much as 10x a typical relay fee for an average message), the hidden service delivers a new routing header to use in each response message, and routing headers expire an hour after the last message, to keep clients from griefing the introduction messaging process in an attempt, I suppose, to provoke traffic between introducer and hidden service.

It does mean more latency, 10 hops instead of 6, but it minimises the risk of congestion at the rendezvous points while providing bidirectional anonymity.