708 sats \ 3 replies \ @based 13 Apr \ on: Isn't utxo set growth a big issue? bitcoin_beginners
It's a problem, and it's shameful that Bitcoin is being vandalized. It is what it is, but is it a huge issue? Not big enough to stop Bitcoin, I reckon, and you can check that for yourself.
How many years would it take to reach 100 TB? Can we suppose storing 100 TB will be expensive that year? You can make an estimate given the recent growth rate, and if you find out the hard limits of how many UTXOs can be created in a block, you can calculate the worst case too. That will tell you the severity of the issue.
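If you want to play with the numbers, here is a minimal sketch of that estimate. The current chainstate size and growth rate below are placeholder assumptions, not measured figures; substitute what your own node reports.

```python
# Back-of-envelope: years until the UTXO set (chainstate) reaches 100 TB,
# extrapolating linearly from an assumed recent growth rate.
# Both inputs are placeholders -- plug in figures from your own node,
# e.g. the size of the chainstate directory now vs. a year ago.

current_size_gb = 12        # assumed current chainstate size on disk
growth_gb_per_year = 2      # assumed recent growth rate

target_gb = 100 * 1000      # 100 TB expressed in GB
years = (target_gb - current_size_gb) / growth_gb_per_year
print(f"~{years:.0f} years at {growth_gb_per_year} GB/year")
```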
Note that the UTXO set does not need to fit in RAM, nor does validating a block get significantly more expensive when the UTXO set is larger. You didn't say so, but that's something many seem to take for granted as the truth, and I have no idea why.
Bitcoin Core has a cache in RAM to improve performance, and searching the UTXO set uses indexes for fast lookups. You don't go through each UTXO one by one to find the data needed to validate a block, so validation doesn't get slower just because the set is larger. This is exactly how SQL and other databases work; in fact, it is a database. Having your entire data set fit in RAM is certainly nice for performance, but it's only required if you need very high throughput and low latency. Bitcoin is a system where thousands of transactions need to be processed on average every 10 minutes. Ideally that should take only seconds at most, but there's no hard requirement. Mostly, a larger UTXO set will make initial block download slower when setting up a new node without trusting any previous data.
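As a toy illustration of that lookup pattern (this is not Bitcoin Core's actual code; it keeps the UTXO set in an on-disk key-value store with an in-RAM cache controlled by the dbcache setting):

```python
# Toy model of why validation cost tracks the block, not the UTXO set size:
# checking a block only requires looking up the outpoints it spends, which is
# a keyed lookup (a dict here), not a scan of every UTXO in the set.

utxo_set = {("txid_a", 0): 50_000, ("txid_b", 1): 12_345}  # outpoint -> value (toy data)

def inputs_exist(block_inputs):
    """Each spent outpoint is fetched directly by key; the cost grows with the
    number of inputs in the block, not with len(utxo_set)."""
    for outpoint in block_inputs:
        if outpoint not in utxo_set:
            return False    # missing or already spent
    return True

print(inputs_exist([("txid_a", 0)]))  # True
```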
It's already addressed by the block size limit. It limits the rate at which UTXOs can be created.
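A rough upper bound on that rate, under a few simplifying assumptions (4M weight units per block, non-witness bytes costing 4 weight units each, roughly 9 bytes per minimal output, transaction overhead ignored, so the real maximum is somewhat lower):

```python
# Rough ceiling on UTXO creation imposed by the block weight limit.
max_base_bytes = 4_000_000 // 4        # ~1 MB of non-witness data per block
min_output_bytes = 9                   # 8-byte value + 1-byte script length + near-empty script
blocks_per_year = 6 * 24 * 365         # ~52,560

max_utxos_per_block = max_base_bytes // min_output_bytes
max_utxos_per_year = max_utxos_per_block * blocks_per_year
print(f"<= ~{max_utxos_per_block:,} new UTXOs per block, "
      f"~{max_utxos_per_year:,} per year in the worst case")
```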
Completely uninteresting, and it solves nothing at all unless it's done by miners who are taking a stance on the issue and leaving profit on the table to do so. Activist miners also can't be stopped from spending their resources mining blocks with less "spam". They can do additional rate limiting on top of the block size limit. No one else can.
Luke Dashjr has long wanted to decrease the block size to only 300 KB. But Lightning doesn't thrive with small blocks either, I hear, so I don't think he has much, if any, support.
Making a transaction right now costs several dollars in fees. Would tens or hundreds of dollars in fees be preferable today? Is Lightning ready to step in?
More "annoying" than "existential" on a scale of severity I think. But I'm looking forward to your analysis.
Cheers for the reply.
How many years would it take to reach 100 TB? Can we suppose storing 100 TB will be expensive that year?
I'm amenable to the argument that technological improvements will save us, but at the same time I'd rather not rely on it - although given the trouble anyone seems to have introducing any change, I'm inclined to think we have no choice.
Luke Dashjr has long wanted to decrease the block size to only 300 KB. But Lightning doesn't thrive with small blocks either, I hear, so I don't think he has much, if any, support.
That proposal makes sense to me - as a temporary measure until demand picks up. Fees will eventually be high, so that might as well be embraced now. But I appreciate that, in terms of onboarding, this reduces the number of people that will be able to get their own utxo before being priced out, and that the infrastructure for a high fee environment takes time to build.
More "annoying" than "existential" on a scale of severity I think. But I'm looking forward to your analysis.
I'd guess if it ever did become life-or-death, then the utxo set would be pruned. But like you, I don't see it ever becoming that bad. If there is an issue, maybe it's that it won't cause a sudden shock but will just slowly get worse with time, and so people will put up with it.
It's not my analysis, but while I was trying to answer this question I found this post: https://bitcoin.stackexchange.com/a/115451
The post (with various assumptions) says it would take 104 years until we reach the maximum number of utxos that aren't dust.
I was hoping people already had an idea to combat this, but it seems there is no magic solution. I guess the takeaway here is that the problem is only annoying, as you said. And maybe some clever clogs will come up with something :)
Bitcoin is a system where thousands of transactions need to be processed on average every 10 minutes
There are attacks against Bitcoin that use "non-standard" transactions, i.e. transactions a node would refuse to broadcast but that can technically be mined (the thing MARA is advertising), which could dramatically increase validation cost and make it take longer than 10 minutes to validate a block on some smaller machines like a Raspberry Pi. Like everything, though, this attack requires the attacker to have enough money to keep it going in perpetuity, and while small single-board computers are limited to 8 GB of RAM today, it's not impossible to imagine single-board computers having 16-32 GB of RAM to tap into in just a few years.
There are lots of ways to attack, clog, or spam Bitcoin; the beauty is they all cost money, and blocks are limited, which keeps anything from ballooning too quickly.