I am a relative noob to the more technical stuff, but just thinking about it myself, utxo set growth seems like a huge issue.
If the utxo set grows to be several terabytes - it could eventually be 100s of terabytes? - then running full nodes becomes costly. Validating the transactions in a new block also becomes costly, so there's more chance that people start trusting others to do the validation for them. And then shenanigans become possible.
The only solution I currently see to a large utxo set is to prune it to a fixed size, with the smallest utxos becoming unspendable, which doesn't seem ideal.
I know of utreexo, but that seems to have issues, as there is no incentive to run a bridge node. So we're still trusting people to run those bridge nodes - and that might be costly. Maybe with some tweaks this can be fixed so that it's cheap for most people to validate transactions?
Are there other potential solutions that I've not heard of / that people are working on?
In the meantime, we have people spamming the utxo set with stuff like bitcoin stamps, increasing its size quickly. To me this feels like an attack, since the size of the utxo set has long-term consequences for the affordability of future full nodes. I would think this needs addressing sooner rather than later, but nobody seems to be panicking about this, so maybe I'm missing something?
I know some people argue for filters, but as I understand it, those only keep a transaction out of the mempool of a node that enforces them. They don't prevent a miner from processing transactions that bloat the utxo set. So they seem like no more than a band-aid on the problem.
I don't understand why people aren't arguing for a block size decrease - that seems like the obvious solution? Since the central problem seems to be that there currently isn't enough competition for space in blocks.
But even if the utxo set was growing at a modest rate, I still don't see what the long term solution is. Maybe there is no avoiding that, in the future, full nodes will be costly to set up and maintain, but it's not an existential risk?
utxo set growth seems like a huge issue
It's a problem, and it's shameful that Bitcoin is being vandalized. It is what it is, but is it a huge issue? Not big enough to stop Bitcoin, I reckon, which you can prove to yourself.
If the utxo set grows to be several terabytes - it could eventually be 100s of terabytes?
How many years would it take to reach 100 TB? Can we suppose storing 100 TB will be expensive that year? You can make an estimate given the recent growth rate, and if you find out the hard limits of how many UTXOs can be created in a block, you can calculate the worst case too. That will tell you the severity of the issue.
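If you want to run those numbers yourself, here's a back-of-envelope sketch in Python. Every constant in it is an assumption for illustration (approximate block capacity, a minimal non-standard output, a guessed on-disk footprint per entry), not a value taken from Bitcoin Core:

```python
# Back-of-envelope worst-case UTXO set growth. All constants are
# illustrative assumptions, not values read out of Bitcoin Core.

MAX_BLOCK_VBYTES = 1_000_000  # ~4M weight units / 4 for all-non-witness data
MIN_OUTPUT_BYTES = 9          # 8-byte value + 1-byte length + empty script (non-standard)
BLOCKS_PER_YEAR = 52_560      # 144 blocks/day * 365 days
UTXO_DISK_BYTES = 75          # assumed average on-disk footprint per chainstate entry

# Upper bound, ignoring per-transaction overhead, so slightly generous.
max_new_utxos_per_block = MAX_BLOCK_VBYTES // MIN_OUTPUT_BYTES
utxos_per_year = max_new_utxos_per_block * BLOCKS_PER_YEAR
growth_per_year_tb = utxos_per_year * UTXO_DISK_BYTES / 1e12

print(f"worst-case new UTXOs per block: ~{max_new_utxos_per_block:,}")
print(f"worst-case new UTXOs per year:  ~{utxos_per_year:,}")
print(f"worst-case set growth per year: ~{growth_per_year_tb:.2f} TB")
print(f"years to add 100 TB at that rate: ~{100 / growth_per_year_tb:.0f}")
```

Even under that deliberately hostile scenario, reaching 100 TB takes a couple of centuries with these assumptions, and the actual historical growth rate is far below the ceiling.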
Note that the UTXO set does not need to fit in RAM, nor does it get significantly more expensive to validate a block when the UTXO set is larger. You didn't say so, but that's something many seem to take for granted as the truth, and I have no idea why.
Bitcoin Core keeps a cache in RAM to improve performance, and searching the UTXO set uses indexes for fast lookups. You don't go through each UTXO one by one to find the data needed to validate a block, so validation doesn't get slower just because the set is larger. This is exactly the same as SQL and other databases; in fact, it is a database. Having your entire data set fit in RAM is certainly nice for performance, but it's not required unless you need very high performance and low latency. Bitcoin is a system where thousands of transactions need to be processed on average every 10 minutes; ideally that takes only seconds, but there's no hard requirement. Mostly, a larger UTXO set will make initial block download slower when setting up a new node without trusting any previous data.
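A toy illustration of that point, using a plain hash map as a stand-in for the real chainstate database (Bitcoin Core actually uses LevelDB plus an in-RAM cache, but the principle is the same):

```python
import time

# Toy stand-in for the chainstate: a hash map keyed by (txid, output index).
# The point: lookup cost depends on the index structure, not on scanning
# every entry, so a bigger set doesn't mean proportionally slower validation.

def build_utxo_set(n):
    return {(f"txid{i}", 0): {"value_sats": 1_000, "height": i} for i in range(n)}

for n in (10_000, 100_000, 1_000_000):
    utxos = build_utxo_set(n)
    probe = (f"txid{n // 2}", 0)
    start = time.perf_counter()
    for _ in range(100_000):
        _ = utxos[probe]  # constant-time lookup, regardless of n
    elapsed = time.perf_counter() - start
    print(f"set size {n:>9,}: 100k lookups in {elapsed:.3f}s")
```

The timings stay roughly flat as the set grows; a disk-backed database adds log-ish factors and cache misses, but nothing close to a linear scan.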
I would think this needs addressing sooner rather than later, but nobody seems to be panicking about this, so maybe I'm missing something?
It's already addressed by the block size limit. It limits the rate at which UTXOs can be created.
filters
Completely uninteresting; filters solve nothing at all unless they're applied by miners taking a stance on the issue, leaving profit on the table to do so. Activist miners also cannot be stopped from spending their resources mining blocks with less "spam"; they can apply additional rate limiting on top of the block size limit. No one else can.
I don't understand why people aren't arguing for a block size decrease - that seems like the obvious solution?
Luke Dashjr has long wanted to decrease the limit to only 300 KB. But I hear Lightning doesn't thrive with small blocks either, so I don't think he has much, or any, support.
Since the central problem seems to be that there currently isn't enough competition for space in blocks.
Making a transaction right now costs several dollars in fees. Would tens or hundreds of dollars in fees be preferable today? Is Lightning ready to step in?
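For concreteness, the fee arithmetic behind "several dollars" (the feerate, transaction size, and price below are made-up example numbers):

```python
# Fee in USD: fee = feerate (sat/vB) * transaction size (vB).
# All numbers here are illustrative assumptions.

TX_VSIZE_VB = 140        # roughly a 1-input, 2-output P2WPKH transaction
BTC_PRICE_USD = 60_000   # example price

for feerate_sat_vb in (20, 100, 500):
    fee_sats = feerate_sat_vb * TX_VSIZE_VB
    fee_usd = fee_sats * BTC_PRICE_USD / 100_000_000
    print(f"{feerate_sat_vb:>4} sat/vB -> {fee_sats:,} sats (~${fee_usd:,.2f})")
```

At these example numbers, 20 sat/vB costs a couple of dollars while 500 sat/vB is already over forty, which is roughly the jump being asked about.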
Maybe there is no avoiding that, in the future, full nodes will be costly to set up and maintain, but it's not an existential risk?
More "annoying" than "existential" on a scale of severity I think. But I'm looking forward to your analysis.
Cheers for the reply.
How many years would it take to reach 100 TB? Can we suppose storing 100 TB will be expensive that year?
I'm amenable to the argument that technological improvements will save us, but at the same time I'd rather not trust in that - although given the trouble anyone seems to have introducing any change, I'm inclined to think we have no choice.
Luke Dashjr has long wanted to decrease the limit to only 300 KB. But I hear Lightning doesn't thrive with small blocks either, so I don't think he has much, or any, support.
That proposal makes sense to me - as a temporary measure until demand picks up. Fees will eventually be high, so that might as well be embraced now. But I appreciate that, in terms of onboarding, this reduces the number of people that will be able to get their own utxo before being priced out, and that the infrastructure for a high fee environment takes time to build.
More "annoying" than "existential" on a scale of severity I think. But I'm looking forward to your analysis.
I'd guess if it ever did become life-or-death, then the utxo set would be pruned. But like you, I don't see it ever becoming that bad. If there is an issue, maybe it's that it's not a problem that will cause shock, but will just slowly get worse with time, and so people will put up with it.
It's not my analysis, but while I was trying to answer this question I found this post: https://bitcoin.stackexchange.com/a/115451
The post (with various assumptions) says 104 years until we reach the maximum number of utxos that aren't dust.
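The linked post works from its own assumptions, but the basic ceiling it builds on is easy to sanity-check: every non-dust UTXO must hold at least the dust limit, so the total supply caps the count. A minimal sketch (using the 546-sat P2PKH dust limit; other output types have slightly different limits):

```python
# Hard ceiling on non-dust UTXOs: total supply / dust limit.
# 546 sats is the P2PKH dust threshold; other script types differ slightly.

TOTAL_SUPPLY_SATS = 21_000_000 * 100_000_000
DUST_LIMIT_SATS = 546

max_nondust_utxos = TOTAL_SUPPLY_SATS // DUST_LIMIT_SATS
print(f"max non-dust UTXOs: ~{max_nondust_utxos:,}")  # ~3.8 trillion
```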
I was hoping people already had an idea to combat this, but it seems there is no magic solution. I guess the takeaway here is that the problem is only annoying, as you said. And maybe some clever clogs will come up with something :)
Bitcoin is a system where thousands of transactions need to be processed on average every 10 minutes
There are attacks against Bitcoin that use "non-standard" transactions, i.e. transactions a node would refuse to relay but which can technically be mined (the thing MARA is advertising). These can dramatically increase validation work, potentially making a block take longer than 10 minutes to validate on smaller machines like a rpi. Like everything, though, this attack requires the attacker to have enough money to keep it going in perpetuity, and while small single-board computers are limited to 8 GB of RAM today, it's not impossible to imagine single-board computers with 16-32 GB of RAM to tap into in just a few years.
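For anyone curious why validation can blow up like that: the best-known version is the quadratic hashing behavior of legacy (pre-segwit) signature checks, where each input re-hashes roughly the whole transaction. A toy model of the workload (the per-input byte count is an assumption for illustration):

```python
# Toy model of legacy-sighash work: each of n inputs hashes roughly the
# whole transaction, so total bytes hashed grow ~quadratically with n.
# BYTES_PER_INPUT is a rough assumption, not an exact serialization size.

BYTES_PER_INPUT = 150

for n_inputs in (100, 1_000, 5_000):
    tx_bytes = n_inputs * BYTES_PER_INPUT
    total_hashed = n_inputs * tx_bytes  # one near-full-tx hash per input
    print(f"{n_inputs:>5} inputs: tx ~{tx_bytes / 1e6:.2f} MB, "
          f"hashing ~{total_hashed / 1e9:.2f} GB")
```

Segwit's signature hashing (BIP143) removed this quadratic behavior for segwit inputs, which is part of why such attacks lean on legacy, non-standard constructions.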
There are lots of ways to attack, clog, or spam Bitcoin; the beauty is they all cost money, and blocks are limited, which keeps anything from ballooning too quickly.