
most hops [are] succeeding successfully
My favorite sentence
reply
Great technical hero's journey! I hope we can eventually get to a point where mobile nodes don't need to do so much outsourcing, but based on this blog post it seems unavoidable for the time being.
We're interested in combining probing data with gossip data to create a reinforcement learning AI that can hook into our prober and keep learning from it, with the plan to take the trained model and ship it to end users. When trampoline routing comes along, this may be very useful to us from a reliability and privacy standpoint.
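Very roughly (just a sketch, nothing we've built, and the names are made up), combining the two might look like giving each channel a record that carries both its advertised gossip policy and the observed probe results:

```python
# Hypothetical sketch: join per-channel gossip policy data with probe outcomes,
# so each channel carries both advertised (capacity, fees) and observed
# (success rate) features that a learning agent could train on.
from dataclasses import dataclass


@dataclass
class ChannelFeatures:
    capacity_sat: int          # from channel_announcement / funding output
    base_fee_msat: int         # from the latest channel_update
    fee_rate_ppm: int          # from the latest channel_update
    probes_sent: int = 0       # observed via probing
    probes_succeeded: int = 0  # observed via probing

    @property
    def observed_success_rate(self) -> float:
        # Fall back to an optimistic prior when the channel has never been probed.
        if self.probes_sent == 0:
            return 1.0
        return self.probes_succeeded / self.probes_sent


def record_probe(features: dict[str, ChannelFeatures], scid: str, success: bool) -> None:
    """Update the observed stats for one channel after a probe attempt."""
    ch = features[scid]
    ch.probes_sent += 1
    if success:
        ch.probes_succeeded += 1


# Example: one channel seen in gossip, then probed twice.
features = {"850000x1x0": ChannelFeatures(capacity_sat=5_000_000,
                                          base_fee_msat=1_000,
                                          fee_rate_ppm=100)}
record_probe(features, "850000x1x0", success=True)
record_probe(features, "850000x1x0", success=False)
print(features["850000x1x0"].observed_success_rate)  # 0.5
```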
AI? vc swipes right
reply
I've thought about it for quite a while tbh, and this current AI wave is just swimming in garbage LLM stuff, so it's not the hottest thing anyways lol. But I do think there's something in RL for Lightning, especially if you have the data and continued learning. At the very least it can be passive until it starts being correct most of the time.
reply
I'm really excited to see Mutiny digging into RL attachment strategies, though I'm less optimistic about it being useful for mobile users. Generally speaking, end users will want to minimize their on-chain footprint (ideally 1 channel), so there's not much insight to be gained. LSPs, on the other hand, will be big winners, as they can dramatically improve node positioning (e.g. betweenness).
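To illustrate the positioning angle with a toy example (made-up graph, just a sketch using networkx): an LSP could rank candidate peers by how much opening a channel to them would raise its betweenness.

```python
# Rough illustration (not anyone's production code): score candidate peers by
# how much a new channel would improve an LSP's betweenness centrality.
# The toy graph below is made up.
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([
    ("lsp", "a"), ("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"),
])


def betweenness_gain(g: nx.Graph, node: str, candidate: str) -> float:
    """Betweenness improvement for `node` if it opened a channel to `candidate`."""
    before = nx.betweenness_centrality(g)[node]
    trial = g.copy()
    trial.add_edge(node, candidate)
    after = nx.betweenness_centrality(trial)[node]
    return after - before


candidates = [n for n in graph.nodes if n != "lsp" and not graph.has_edge("lsp", n)]
ranked = sorted(candidates, key=lambda c: betweenness_gain(graph, "lsp", c), reverse=True)
print(ranked)  # peers whose new channel would most improve the LSP's positioning
```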
reply
Maybe on the trampoline level it'll be useful. Positioning is one aspect an LSP can take advantage of for better connectivity. But being aware of the graph, and a model trained on graph components, could also be useful for end nodes. I do agree tho, probably more on the LSP level. And the long-term goal is that trampoline will be a scaling solution for end users, and trampoline nodes can take advantage of AI.
reply
Lol you're quick, thanks for sharing and forwarding!
reply
We're interested in combining probing data with gossip data to create a reinforcement learning AI
My hunch is RL on a dynamic graph won't perform well. Have you defined a problem statement for training the agent?
reply
A dynamic graph is precisely why you would use a learning agent.
reply
The dynamic graph built from LN probes and gossip data has many temporal dependencies, i.e. past actions affect the topology of the graph. It is challenging to account for these temporal patterns in the problem statement of the RL agent.
reply
We haven't done any work on it yet. If it were strictly based on probing data, I don't think it would work because the graph is dynamic. But there might be something in there by feeding in the dynamic parts via the historical gossip we are saving, combined with the probing data. Even the gossip alone might help identify trends and changes in liquidity / fees across the network.
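As a toy example of the "gossip alone" part (hypothetical data layout, not anything we've actually built): flag channels whose fee rate keeps climbing across the saved channel_updates, which might hint at liquidity shifting in one direction.

```python
# Back-of-the-envelope sketch: scan saved channel_update history per channel
# and flag ones whose fee rate is trending upward. The data layout is made up.
from statistics import mean

# scid -> list of (unix_timestamp, fee_rate_ppm) taken from saved channel_updates
history: dict[str, list[tuple[int, int]]] = {
    "850000x1x0": [(1_700_000_000, 100), (1_700_100_000, 250), (1_700_200_000, 400)],
    "850000x2x0": [(1_700_000_000, 50), (1_700_100_000, 50), (1_700_200_000, 45)],
}


def fee_trend(updates: list[tuple[int, int]]) -> float:
    """Average fee-rate change between consecutive updates (ppm per update)."""
    updates = sorted(updates)
    if len(updates) < 2:
        return 0.0
    deltas = [b[1] - a[1] for a, b in zip(updates, updates[1:])]
    return mean(deltas)


rising = {scid: fee_trend(u) for scid, u in history.items() if fee_trend(u) > 0}
print(rising)  # {'850000x1x0': 150.0} -- candidates to deprioritize in pathfinding
```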
reply
If you felt a chilly breeze earlier @benthecarman, it's your omission from this split.
reply
I'll get him next week
reply