52 sats \ 5 replies \ @justin_shocknet 17 Aug \ parent \ on: Highly available Lightning node cluster setup guide lightning
Yea I meant something like Ceph over it
Ceph and CNPG (CloudNativePG) would be a nice fit.
Can you wrap that single server as a Docker container? If so, putting it in Kubernetes would solve many issues... I will check OP's repo to get more info, but you have a nice idea there.
reply
I did consider trying with bbolt on top of Ceph, but since etcd is already implemented in lnd it seemed like the more native approach to use etcd. But I am planning to compare this to a setup with Ceph and do some benchmarks.
reply
Cool, I'll be following. It's been too long with LND as the only implementation that's somewhat production ready while lacking HA or even a SQL backend... I'd also like to know more about the cluster awareness, so a passive node doesn't broadcast something.
reply
LND has actually had support for leader election for at least 3 years already. Some documentation on it can be found here: https://docs.lightning.engineering/lightning-network-tools/lnd/leader_election
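For reference, the etcd-backed leader election is switched on with a handful of lnd.conf options. A minimal sketch based on the linked docs (option names should be verified against your lnd version; the host and cluster ID values here are placeholders):

```ini
# Sketch of an etcd-backed HA lnd configuration (values are placeholders).
db.backend=etcd
db.etcd.host=127.0.0.1:2379

cluster.enable-leader-election=true
cluster.leader-elector=etcd
cluster.id=lnd-node-1
```

Each node in the cluster gets its own `cluster.id`, while all nodes point at the same etcd backend so only the elected leader serves the channel state.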
But during my testing I did manage to get two nodes to become active at the same time, which is bad. I described it in this issue: https://github.com/lightningnetwork/lnd/issues/8913
This was an LND bug, where it would not resign from its leader role. etcd was working as it should.
Two weeks later the bug got fixed with this pull request: https://github.com/lightningnetwork/lnd/pull/8938
With the patch applied, `healthcheck.leader.interval` set to 60 seconds and `cluster.leader-session-ttl` set to 100 seconds, I could no longer produce a situation where multiple nodes were active at the same time.

With this configuration, each lnd node creates an etcd lease with a time-to-live of 100 seconds. The lease is kept alive at intervals of one third of the initial time-to-live, so in this case every 33 seconds. When a node loses its connection to the rest of the cluster, it takes 27-60 seconds to initiate a shutdown, and it takes 66-100 seconds for another node to take over. So in this configuration the two windows cannot overlap, and there is no chance of two nodes being active at the same time.
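The timing argument can be sanity-checked with a few lines of Python. The TTL/3 keepalive cadence and the configuration values are taken from the comment itself; the variable names and the worst/best-case reasoning in the comments are my own sketch, not lnd source:

```python
# Back-of-the-envelope check that the shutdown and takeover windows
# described above cannot overlap.

LEADER_SESSION_TTL = 100.0   # cluster.leader-session-ttl, in seconds
HEALTHCHECK_INTERVAL = 60.0  # healthcheck.leader.interval, in seconds

keepalive_interval = LEADER_SESSION_TTL / 3  # lease refreshed every ~33 s

# Worst case for the disconnected leader: it notices the failed lease
# keepalive on its next leader health check, so shutdown starts within
# one full health-check interval.
max_shutdown = HEALTHCHECK_INTERVAL  # 60 s upper bound

# Best case for a standby: the lease was refreshed just before the
# partition, so it cannot expire sooner than TTL minus one keepalive
# interval.
min_takeover = LEADER_SESSION_TTL - keepalive_interval  # ~66.7 s lower bound

# The old leader is guaranteed to be down before a new leader can win.
assert max_shutdown < min_takeover
```

With these numbers the old leader's 60-second shutdown ceiling sits safely below the roughly 66-second floor on failover, matching the "no room for overlap" conclusion.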
reply
Great drop ty
reply