Hey I’m Dusty!
I’m the first to implement splicing for the Lightning Network, and I’m an independent contributor to Core Lightning as well as the Lightning BOLT spec.
Ask me anything!
You could ask me about
  • How Lightning works
  • What working on the code is like
  • What splicing enables
  • How splicing works
  • Questions about splicing code
  • Questions about lightning code
  • The Lightning spec process
  • The status of the splicing spec
  • What’s left to do to finish splicing
With a new implementation like splicing, I am curious what the "adoption" process looks like for getting node operators to use and implement the feature (assuming this is how it works)?
Is this something where you reach out to operators with the new API/docs and work with them to build it into their workflow? What does the testing/release process look like for this? (I apologize in advance if this is already documented somewhere!)
The testing / releasing process is pretty thorough. Other Lightning devs read all the code you’ve written, work to understand it, and review it completely.
At the same time, automated tests will create thousands of Lightning nodes on an internal test network and purposely abuse them. If any of these tests fail, it doesn’t get in.
Once through that it will be released as an experimental feature seeking node operators to use and test it in the wild. After staying there for probably quite a while, it would be officially released.
The docs describe how to use it and that’s kind of it. At that point the only way for people to know to use it is digging through the docs or outreach telling people they can.
I suspect the larger nodes are already paying attention though and will eagerly adopt these things without needing any coaxing.
This makes a ton of sense - thanks!
Is there a "Product Manager" of sorts who works with node operators to determine what they want/need or to manage the feature roadmaps? Would this person work as a full-time employee for Blockstream?
I am curious how you "selected" splicing? Was this your first feature in a part of your getting started story you mentioned? Do you work for Blockstream, or did you choose between starting on CLN vs LND, etc?
I thought about contributing to LND but Core Lightning felt like a better fit.
Ironically, last year I thought splicing was already done 🤣. What had actually been done was completing the spec.
When I realized no one was actually implementing it yet I got excited to do it. The feature just kind of makes logical sense to add. It’s simple conceptually but gets complex as you get into the weeds.
I’m not aware of a product manager over at Blockstream 🤷‍♂️
Thanks for all of the background! I can only imagine the complexity of all of the moving pieces 😅
how will splicing impact LN node operators?
For routing nodes splicing enables some cool liquidity management options.
By resizing channels you can move extra onchain funds into a valuable channel as well as move funds out of a less valuable channel.
You can even do both at the same time, moving funds between channels, and there’s essentially no limit on the number of channels you can splice at once.
And it’s all in the most efficient method possible (a single transaction) without relying on any central coordinator.
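To make the batching idea concrete, here’s a toy Python sketch of a single splice applying deltas to several channels at once. This is not real Core Lightning code; the channel names and amounts are made up for illustration:

```python
# Toy model (not real Core Lightning code): one batched "splice"
# transaction applies a signed delta to several channels at once.
# Positive delta = splice-in (onchain -> channel),
# negative delta = splice-out (channel -> onchain).

def apply_splice(onchain_sats, channels, deltas):
    """Return (new_onchain, new_channels) after one batched splice."""
    new_channels = dict(channels)
    for chan_id, delta in deltas.items():
        if delta < 0 and channels[chan_id] + delta < 0:
            raise ValueError(f"cannot splice out more than {chan_id} holds")
        new_channels[chan_id] = channels[chan_id] + delta
    # Net sats leaving the onchain wallet across all channels in this tx.
    net = sum(deltas.values())
    if net > onchain_sats:
        raise ValueError("not enough onchain funds for the splice-ins")
    return onchain_sats - net, new_channels

# Top up a busy channel and drain a quiet one in the same transaction.
onchain, chans = apply_splice(
    100_000,
    {"busy_channel": 50_000, "quiet_channel": 200_000},
    {"busy_channel": +80_000, "quiet_channel": -60_000},
)
```

The point the toy model captures is that the onchain wallet only pays the *net* difference (20,000 sats here), since the splice-out partly funds the splice-in within one transaction.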
Thanks for everything you do, and taking time for an AMA!
I am curious as to your general day-to-day as a CLN engineer... what are some of the biggest differences working in this space as compared to any of your previous work? How did you get started in this role/space?
Thanks man!
Day to day it’s a lot of reading and writing code. Splicing is a bit of a doozy. It affects so many other parts of the code base that I often find myself jumping around, having to understand more and more of all that’s going on, compared to a more insular feature.
Compared to previous work it’s definitely a lot, lot harder. There are two main reasons for that, I think:
  1. Lightning is just complex. There’s a saying I like, “lightning is all the parts of bitcoin you didn’t understand plus a whole bunch of new things you didn’t understand.”
  2. It’s all being written for the first time. A common technique on other coding projects is to search Stack Overflow for a problem you run into and find a premade solution. There are also lots of tools out there to help with common coding problems. For the problems that come up in Lightning, neither exists, so you’re just on your own. It feels very much like frontier work.
I got started by… just getting started I guess lol. I spent a few months just reading the code base and trying to fix little things here and there as I went. @niftynei was very generous and continually helped me work through parts that were complex to understand. As a lightning dev veteran she has a lot of deep knowledge on it all.
In the end they’re all open source projects and anyone can contribute. And how cool is it to work on Lightning? I think it’s pretty great to be able to anyway 🤣.
I haven't heard that saying, but it sounds very accurate 😂 and without S/O 🤯
Thanks for your insight - and work to push everything forward!
Dusty Daemon works in the day, mon
Works all day on the LN daemon
I'm glad we have someone to count on
Unlike Alex Mashinsky, that big con
It’s a poem! Wow thanks man 🙏
Do you wish it was easier or more difficult for developers to make changes to the Lightning protocol?
I think the process is the right level of difficulty at the moment.
At its core the protocol is really defined by what features nodes have & enable, negotiated between two of them.
The spec process doesn’t really have control over that; it’s more of a coordination effort that provides a place for Lightning developers to both establish interoperability and get vital feedback from each other.
Since we’re coding money itself that feedback process is great for revealing potential problems early on that one person may be more equipped to understand than another.
It also functions as a blueprint for future Lightning implementations, and I think that’s important in its own right.
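The “features nodes have and enable, negotiated between two of them” idea can be sketched in a few lines. This is a simplified illustration of the even/odd (“it’s OK to be odd”) convention from BOLT 9, where even bits are compulsory and odd bits are optional; the bit numbers below are made up, not the real assignments:

```python
# Simplified sketch of BOLT 9-style feature negotiation:
# even bits are compulsory, odd bits are optional ("it's OK to be odd").
# Two peers can talk only if each understands every *even* bit the other sets.

def supports(features, bit):
    # A node "supports" a feature if it sets either half of the
    # even (compulsory) / odd (optional) bit pair.
    pair = bit - (bit % 2)
    return pair in features or pair + 1 in features

def compatible(ours, theirs):
    for bit in theirs:
        if bit % 2 == 0 and not supports(ours, bit):
            return False  # they require a feature we don't understand
    for bit in ours:
        if bit % 2 == 0 and not supports(theirs, bit):
            return False  # we require a feature they don't understand
    return True

# Illustrative bit numbers only, not real BOLT 9 assignments.
assert compatible({0, 13}, {1, 12})  # compulsory 0 meets optional 1: fine
assert not compatible({3}, {6})      # they require bit 6; we can't speak it
```

This is why the spec doesn’t “control” the protocol: what two nodes actually run is whatever feature set they both advertise and agree on at connection time.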
what kinds of problems might a future LN node implementation solve that the existing implementations can’t or won’t solve?
Hm, that’s an interesting question and kind of hard to speculate on. Before LDK came along I didn’t think an enterprise-first style implementation would exist, and now we have one of those, with phantom nodes and other cool scale-based features.
I think pretty highly of the existing Lightning devs out there. If they won’t add a particular feature I suspect there would probably be a good reason for not doing it.
In the near term it does feel like there is just a ton to be done and too few people to do it all. But perhaps one day Lightning will ossify and we’ll be able to look back and say, oh hey, it’s done.
I don’t see a day like that coming anytime soon though.
One area Lightning devs aren’t working on is automated channel management and rebalancing, and I do suspect that area will see some pretty awesome innovation over time. If that progresses well, and I suspect it will, we’ll eventually have one-button Lightning node deployments that just do everything for you automatically.
appreciate the insight, thanks!
Yo Dusty! Wondering what the UX could be like for splicing. Like how will a node operator decide between doing a rebalance vs. a channel splice vs. opening a fresh channel?
I think they’ll probably do all three depending on their needs!
A lot of lightning nodes keep a reserve of onchain funds ready to use for a new channel they might need later.
Splicing allows them to keep that reserve in Lightning and move it around freely.
The UX of it kind of depends on the front end you use like say RTL.
Hey Dusty! Super excited that you’re working on splicing! Really amazing liquidity management for node operators and really cool wallet models open up once we have splicing. Such a neat new primitive!
What’s the status of the spec and what’s left to finish splicing?
Thanks man!
The spec is like 98% done, but things tend to come up that need changing as it gets implemented. For example: should we use the existing funding_locked message or a custom splice_locked? Eclair has a 65kB message limit, so we need to add a way to split commit_sig into 65kB chunks. Originally we weren’t worried about the order in which tx_signatures was sent between nodes, but over time it became apparent that it does matter: the peer with the lowest sats should sign first. That also meant commit_sig needed to follow a matching order.
So ya know, things like that have been popping up. Smaller in comparison to the whole spec but still all important.
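The signing-order rule mentioned above is simple enough to sketch. This is a toy illustration, not spec or implementation code, and the node names and amounts are invented:

```python
# Toy sketch of the splice signing-order rule: the peer contributing
# the fewest sats sends tx_signatures first, and commit_sig follows
# the same order. Not real spec code; names/amounts are illustrative.

def signing_order(contributions):
    """Return node ids in the order they should send tx_signatures,
    lowest sats contributed first."""
    return sorted(contributions, key=lambda node: contributions[node])

order = signing_order({"alice": 30_000, "bob": 120_000})
assert order == ["alice", "bob"]  # alice put in fewer sats, so she signs first
```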
Implementation-wise it’s basically functionally done, with a few odds and ends left to implement. Instead of finishing those right away, though, I’m switching over to building unit tests, which will give me a clear road map of all the odds and ends.
Super! Thanks for the update. I was wondering how much whack a mole you were playing with funny edge cases in different implementations :-)
Dusty we love you😍
I'm concerned about Taro, which is attacking Bitcoin by
  1. issuing assets like stablecoins on the most decentralized and secure blockchain
The problem is that in the case of a fork stablecoin issuers get to weigh in or even pick a winner. Ethereum is there already:
stablecoin issuers in control
Curve Finance: Yes, they are in control of Ethereum, essentially
Do you think LN devs should really go through with Taro?
Taro’s not something I’m that interested in personally.
Having not looked into it too deeply it does look pretty quarantined away from the main protocol for what that’s worth.
Asset issuing has been a thing on the Bitcoin blockchain for ages (Counterparty being one example). It's not new.
Yes and this is bad. It turned out not to be a problem only because nobody used it.
What's something you believe about Bitcoin that few bitcoiners agree with you on?
Probably that bitcoinization will bring back the prosperity America experienced during the Gilded Age of the 1880s and 1890s.
A lot of people think hard money will cause some kind of recession before bringing prosperity but I suspect it will bring prosperity quite rapidly.
Thanks! I now have a better idea about splicing. As a node operator I am excited to see these features come. Rebalancing is a pain and costly.
If splicing were a taco, what flavor would it be?
Definitely a double meat steak taco
I do realize that this question is kinda late, but when you published the first mainnet splice, the old and new channel UTXOs shared the same address. Since address reuse is very bad for privacy, are there any plans to fix this? Is it even possible?