I know everyone has PTSD from 2017, but I’m starting to come around to a blocksize increase in something like 5-10 years. I think we need to start planning it now, and nobody wants to have that conversation yet.
I was extremely against the idea of ever hard forking in a blocksize increase, but then I heard Rusty voice this concern on Stephan’s show and it made a lot of sense. We aren’t going to be able to open enough channels, and everyone knows it. What if something like channel factories or layer 3 scaling doesn’t work? Will all of this have been for nothing? If we plan a blocksize increase way out in the future, then at least we’ll have a temporary boost that buys us more time to scale Lightning. Thoughts?
I honestly don't think that's gonna happen. The whole point of keeping the blocksize small was to prevent centralization of nodes, among other things. Actually reading 'The Blocksize War' atm.
Look at the mempool over the last 2 months. Why do you think we need to increase the blocksize? It sounds like your concern is about being able to open enough channels. That is where the fee incentive comes in: if you want your channel open right now and can't wait a week or two, then you will have to pay up (in a high-fee environment). Honestly, ever since blockchain.info/blockchain.com started using segwit, the mempool has been clearing much more often. What pod exactly are you referring to with Rusty? Is it an SLP episode?
Yeah, it was on a recent episode of SLP. Trust me, I totally understand the argument for a small blocksize: the more expensive it is to run a node, the fewer nodes we have. But the amount of Lightning adoption this last year has been incredible. So many new people joined this space and are running a Lightning node and a full node for the first time. I don’t have numbers on this, but I think there are a ton of people running a full node now who wouldn’t be doing so if it weren’t for Lightning. If Lightning adoption continues to grow like this, channel-open fees will become too expensive and turn into the bottleneck. TLDR: Demand for L2 creates more L1 nodes. If supply on L1 can’t meet demand from L2, we still end up with fewer L1 nodes.
I find this to be a coherent argument, and I don't think many technical Bitcoiners would be allergic to such a discussion. I'd categorize this as less a problem with blocksize and more a problem of governance generally. Hard forking the network is a problem in and of itself, and hard forks will probably only happen when the network is desperate. Hypothetical need/want isn't enough even for a soft fork.
I think the best thing you can do if you're concerned about the issue is follow Rusty's lead and promote the discussion more, so that if this problem appears people are a little more open to hard forking.
I was totally closed off to any blocksize increase at all, but changed my mind when I saw it from a Lightning perspective. I wonder if other never-forkers would be receptive to the same framing.
Also don't forget that as we increase the blocksize we slowly move away from the fair lottery that mining is supposed to be. Bigger blocks take longer to propagate, so it becomes more of a race between well-connected miners than a fair lotto.
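To put rough numbers on that (a minimal sketch; the propagation delays below are made-up assumptions, and the 1 - e^(-t/600) stale-rate formula is the usual Poisson back-of-envelope, not a measurement of the real network):

```python
import math

BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's target block time

def stale_probability(propagation_delay_s: float) -> float:
    """Approximate chance a freshly mined block gets orphaned because a
    competitor finds a block while ours is still propagating.
    Poisson approximation: P = 1 - exp(-t / 600)."""
    return 1.0 - math.exp(-propagation_delay_s / BLOCK_INTERVAL)

# Hypothetical delays, assuming propagation scales roughly with block size
# (these seconds-per-size figures are illustrative, not measured).
for size_mb, delay_s in [(1, 2.0), (4, 8.0), (16, 32.0)]:
    print(f"{size_mb:>2} MB block, ~{delay_s:>4.0f}s to propagate: "
          f"stale risk ≈ {stale_probability(delay_s):.2%}")
```

The point being that the miner who hears about new blocks last eats most of that stale risk, which is the "race" part.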
Currently we are in the "you need a 1TB SSD" state. The question is how many years from now this is going to flip to "you need a 2TB SSD".
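For a rough sense of the timeline (all the figures below are assumptions plugged in for illustration: current chain size, average block size, and index overhead are ballpark, not authoritative):

```python
# Ballpark sketch of when a node outgrows a given SSD.
BLOCKS_PER_DAY = 144
AVG_BLOCK_MB = 1.7        # assumed average, well under the ~4 MB weight ceiling
CURRENT_CHAIN_GB = 500    # assumed current size on disk
OVERHEAD = 1.15           # assumed extra space for indexes / undo data

growth_gb_per_year = BLOCKS_PER_DAY * 365 * AVG_BLOCK_MB * OVERHEAD / 1000

for ssd_gb in (1000, 2000):
    years = (ssd_gb - CURRENT_CHAIN_GB) / growth_gb_per_year
    print(f"{ssd_gb} GB SSD full in roughly {years:.1f} years at this growth rate")
```

On those assumptions growth is on the order of ~100 GB/year, so the flip is measured in years, and it moves in lockstep with any blocksize change.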
Storage isn’t the bottleneck; bandwidth is, at least for L1 nodes. I suspect the same is true of Lightning as well.
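Rough numbers on the bandwidth side (again, assumed figures: the peer fan-out, average block size, and relay-overhead multiplier here are illustrative, not measured):

```python
# Ballpark monthly traffic for a listening full node.
AVG_BLOCK_MB = 1.7
BLOCKS_PER_MONTH = 144 * 30
TX_OVERHEAD = 2.0     # assumed multiplier for unconfirmed-tx relay + INV chatter
RELAY_FANOUT = 4      # assumed average number of peers we upload data to

download_gb = AVG_BLOCK_MB * BLOCKS_PER_MONTH * TX_OVERHEAD / 1000
upload_gb = download_gb * RELAY_FANOUT

print(f"~{download_gb:.0f} GB down, ~{upload_gb:.0f} GB up per month")
# Double the block size and both numbers roughly double, unlike storage,
# which only grows by the new delta.
```

Even on these conservative assumptions, monthly upload dwarfs the monthly disk growth, and it scales linearly with the blocksize.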
Hear me out, can we start talking about a blocksize decrease?
Cross-input signature aggregation would have much the same effect as a blocksize increase: transactions with many inputs would carry a single signature instead of one per input, so more of them fit in each block.
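A quick size sketch of why that helps (the per-input vbyte figures are approximate taproot keypath numbers; the aggregation saving itself is hypothetical, since no such soft fork exists today):

```python
# Rough vbyte math for a many-input transaction, with and without
# hypothetical cross-input signature aggregation.
SIG_VBYTES = 16.25         # ~65-byte Schnorr sig in the witness (counts at 1/4)
INPUT_VBYTES = 57.5        # full taproot keypath input, signature included
TX_OVERHEAD_VBYTES = 10.5  # version, locktime, counts, segwit marker (approx.)
OUTPUT_VBYTES = 43         # single P2TR output

def tx_vbytes(n_inputs: int, aggregated: bool) -> float:
    sigs = SIG_VBYTES if aggregated else n_inputs * SIG_VBYTES
    return TX_OVERHEAD_VBYTES + n_inputs * (INPUT_VBYTES - SIG_VBYTES) + sigs + OUTPUT_VBYTES

for n in (2, 10, 100):
    before, after = tx_vbytes(n, False), tx_vbytes(n, True)
    print(f"{n:>3} inputs: {before:.0f} -> {after:.0f} vbytes ({1 - after/before:.0%} smaller)")
```

So consolidations and big coinjoins shrink meaningfully, while a simple 1-in/2-out payment barely changes; it's a throughput boost, just not a uniform one.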
Simply putting Lightning on Liquid would fix this issue and allow us to decrease the block size 🤔
Is that you, Roger Ver?
Whatever happened to him? Haven’t heard a peep from him in like 2 years. Is he still alive?
I think it's way too soon to start talking about a blocksize increase. Maybe if we can show years of sustained activity that isn't from some obviously temporary source (like intermittent shitcoin or NFT hype), and we've already done all the other things that make the current blocks more efficient without requiring a hard fork, THEN we could have a real discussion about it.
No rush, at all.
Yeah, I think it makes sense as hardware improves:
  • It'll create more fee-carrying txs, which secure the network long-term
  • It allows more L2 interaction and channel creation
  • It's been 4 years since the last size increase, but I think 2x is too big a leap in one go. It needs to increase gradually, so that the mempool and fees don't go through wild volatility. You might not like fees, but that's what will prevent double spends long-term
With extension blocks it's possible to do it with a soft fork.