I had a nice channel open with a random pleb for 118 days; no issues. Today I noticed the channel was force closed a week ago. My node says it was a ‘Local Force Close’.
However, I did not initiate the closure, and my logs don't show any downtime or unusual activity on the day of the closure.
I also checked the peer's profile page at Amboss.space, and it shows their node had 36 force closed channels in addition to mine on that day – an unusually high number of closures.

My questions (naturally) are...

  1. Under what circumstances would a node auto force close a channel? Is it possible that the peer had some kind of failure which caused my node to initiate the closure?
  2. Is there any way to learn what happened? I don't know the other peer so I can't ask him.
Welcome to LND, where channels close at random. Other reasons this can happen include:
  1. A stuck HTLC in your channel (see the sketch below)
  2. Tor network inefficiency
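On (1), the mechanics as I understand them: lnd will go to chain on its own when an outgoing HTLC gets close to its timeout, because waiting past expiry risks losing the funds. A rough sketch of the trigger; the names and the delta value are illustrative, not lnd's real internals:

```go
package main

import "fmt"

// Illustrative only: lnd's actual deadline logic and constants differ.
const broadcastDelta = 10 // safety margin in blocks before HTLC expiry

// mustGoToChain reports whether a stuck HTLC forces a unilateral close:
// once the chain height gets within broadcastDelta of the HTLC's expiry,
// the node broadcasts its commitment to claim the HTLC on-chain.
func mustGoToChain(currentHeight, htlcExpiry uint32) bool {
	return currentHeight+broadcastDelta >= htlcExpiry
}

func main() {
	fmt.Println(mustGoToChain(799990, 800000)) // true: close now
	fmt.Println(mustGoToChain(799900, 800000)) // false: still time
}
```

That would also explain a "Local Force Close" you never initiated by hand: your own node pulled the trigger automatically.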
reply
Good answer
reply
Forgot to mention… my node is running on Umbrel over Tor.
reply
Tor is very naughty when it comes to long-lived connections. There are no countermeasures for failed hops in the path, and failures can be caused by congestion due to the random selection of rendezvous points, which also give you 6 potential candidates. At one time I used to try to run SSH over it, but the random timeouts (circa 2005) just made it unbearable.
I don't know how any LN node is configured in this regard, but I don't think any of them drive Tor through its control port beyond, at best, managing the creation of hidden services; otherwise they just use its SOCKS5 (or 4a) proxy. There is no "make new circuit" interface in them, as far as I know, and given enough time, 5, 10, 30 minutes, Tor just decides "oh, snap, your connection is b0rked" and finally creates a new circuit to use for the connection.
If there were some way to trigger a new circuit for a given endpoint, either within btcd/lnd/neutrino or more directly through the Tor control protocol, once a timeout starts and without an absurdly long wait, maybe it would be better. But the problem tends to be simply that an anonymising network has few ways to mitigate congestion.
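For what it's worth, Tor's control protocol does have one blunt instrument: SIGNAL NEWNYM, which makes new connections use fresh circuits (it doesn't rebuild circuits already in use, and Tor rate-limits how often you can send it). A minimal sketch, assuming a control port on 9051 with no authentication; cookie or password auth would need a different AUTHENTICATE line:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// newnym asks the local Tor daemon to use clean circuits for new
// connections. Assumes ControlPort 9051 with no authentication.
func newnym() error {
	conn, err := net.Dial("tcp", "127.0.0.1:9051")
	if err != nil {
		return err
	}
	defer conn.Close()

	r := bufio.NewReader(conn)
	for _, cmd := range []string{`AUTHENTICATE ""`, "SIGNAL NEWNYM"} {
		if _, err := fmt.Fprintf(conn, "%s\r\n", cmd); err != nil {
			return err
		}
		line, err := r.ReadString('\n')
		if err != nil {
			return err
		}
		if !strings.HasPrefix(line, "250") {
			return fmt.Errorf("%s rejected: %s", cmd, strings.TrimSpace(line))
		}
	}
	return nil
}

func main() {
	if err := newnym(); err != nil {
		fmt.Println("tor control:", err)
	}
}
```

Even if lnd exposed something like this, the rate limiting means it wouldn't be a silver bullet for a flaky long-lived peer connection.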
I was just thinking about this today, the problem of rate limiting. In my work on Indranet I will be applying a strategy where exit messages (like hidden services, but with the endpoint not behind a rendezvous) include a little hint: a number indicating the level of traffic the node is handling, so the client can select a different path when an exit node gets congested.
In addition, nodes will propagate this value across the p2p network in their state advertisements. From these data points, clients will be able to recognise a sustained rise in congestion and temporarily avoid nodes in that state.
This is one of the reasons Indra will be better than Tor or I2P. Combined with exclusively source routing, it also makes the network consensus hard to attack (Tor's centralised consensus servers have been under sustained attack for most of the last year). Every request your servers send will travel over a different path each time; the law of averages says that more than half the time you won't hit a speed bump anyway, and if you do, the node will detect the failed delivery and run a diagnostic to identify the failing node, avoiding it until its network state, via ping or p2p updates, appears to be back up.
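To make the congestion-hint idea concrete, here's a hypothetical sketch; the names, the 0-255 load scale, and the selection rule are all illustrative, not Indranet's actual design:

```go
package main

import "fmt"

// NodeState is a hypothetical shape for a node's gossiped state,
// carrying the congestion hint described above.
type NodeState struct {
	PubKey   string
	LoadHint uint8 // 0 = idle, 255 = saturated
}

// pickPath filters out nodes whose advertised load exceeds a threshold,
// so freshly built circuits route around sustained congestion.
func pickPath(candidates []NodeState, maxLoad uint8, hops int) []NodeState {
	var path []NodeState
	for _, n := range candidates {
		if n.LoadHint <= maxLoad {
			path = append(path, n)
			if len(path) == hops {
				break
			}
		}
	}
	return path
}

func main() {
	nodes := []NodeState{
		{"alpha", 40}, {"bravo", 220}, {"charlie", 90}, {"delta", 10},
	}
	fmt.Println(pickPath(nodes, 128, 3)) // skips the saturated "bravo"
}
```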
Honestly, I totally recognise and affirm the desire to anonymise, but for a few million sats it's really not worth the trouble. Unless a SWAT team is gonna break down your door because your home connection was detected running a cryptocurrency server, I wouldn't bother. For several bitcoins' worth of channels I'd want some pretty solid security, and it probably works out better to just use a small collection of different VPNs instead.
Tor is a terrible network transport: high latency and poor congestion control. It is a very poor fit for Lightning.
reply
Shot in the dark: maybe your fee negotiation policies were out of sync. Lightning nodes that run on "blocks-only mode," and many that run on Neutrino, perpetually think the feerate is 1 sat per byte. If the blockchain's actual feerate rises above that, they will refuse to negotiate a higher feerate with their peers for htlcs and for updated force closure transactions. That refusal can lead other nodes to force close on them.
This is also why force closure transactions typically use a fee about 3 times the current feerate. The headroom gives you time to negotiate, or to force close using the transaction you created at the previously negotiated feerate. Even if fees are spiking, they are unlikely to triple past the previously negotiated feerate before you can renegotiate or force close on your peer.
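A quick worked example of that buffer, with made-up numbers rather than lnd's actual defaults:

```go
package main

import "fmt"

func main() {
	// Illustrative numbers, not lnd's actual defaults.
	estimate := uint64(10)     // sat/vB from the fee estimator
	commitRate := estimate * 3 // feerate baked into the commitment tx

	// Even if the mempool spikes to, say, 25 sat/vB before the next
	// update_fee round, the pre-signed force close at 30 sat/vB still
	// confirms. A peer stuck believing 1 sat/vB never agrees to raise
	// this rate, and its counterparties eventually go to chain.
	fmt.Printf("commitment feerate: %d sat/vB\n", commitRate)
}
```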
reply
Dig into the logs; the answer should be there.
reply
Any idea what I should be looking for? I've scanned the log line by line but nothing jumps out at me.
reply
Look for the channel point in the logs. Dig out anything related to the channel that was closed.
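If eyeballing it line by line is painful, a tiny filter helps. A rough sketch; the usage is hypothetical and you'd pipe in your lnd log from wherever Umbrel keeps it:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Usage: logsearch <needle> < lnd.log
// Prints every log line containing the channel point, short channel
// id, or peer pubkey you pass in.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: logsearch <channel-point-or-pubkey> < lnd.log")
		os.Exit(1)
	}
	needle := os.Args[1]
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long log lines
	for sc.Scan() {
		if strings.Contains(sc.Text(), needle) {
			fmt.Println(sc.Text())
		}
	}
}
```

Remember that older entries may have rotated into compressed archives, so check those too if the close was a week ago.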
reply
Hmm, nothing. I tried searching for the Channel Point, the Channel Id, and the peer's public key.
reply
Maybe the other pleb's node got stuck and he wasn't able to get it running again. Most likely he had to restore from his SCB file, which force-closes all of a node's channels.
reply
Was it a small channel? Did you lose many sats?
reply
It wasn't a large channel. There were about 1 million sats on both sides. But wouldn't those funds (minus tx fees) get swept back to my on-chain wallet after the time lock?
reply