708 sats \ 3 replies \ @based 13 Apr \ on: Isn't utxo set growth a big issue? bitcoin_beginners
It's a problem, and it's shameful that Bitcoin is being vandalized. It is what it is, but is it a huge issue? Not big enough to stop Bitcoin, I reckon, which you can verify yourself.
How many years would it take to reach 100 TB? Can we assume storing 100 TB will still be expensive by then? You can make an estimate from the recent growth rate, and if you look up the hard limit on how many UTXOs can be created in a block, you can calculate the worst case too. That will tell you the severity of the issue.
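As a starting point, here's a rough back-of-envelope sketch. Every number in it is an assumption to replace with your own measurements:

```python
# Back-of-envelope estimate: years until the UTXO set reaches a target size.
# All numbers below are assumptions; plug in your own measurements.

TARGET_BYTES = 100e12          # the 100 TB threshold discussed above
current_bytes = 12e9           # assumed current chainstate size (~12 GB); check your node
growth_per_year = 2e9          # assumed recent growth (~2 GB/year); derive from history

years_linear = (TARGET_BYTES - current_bytes) / growth_per_year
print(f"Linear growth: ~{years_linear:,.0f} years to reach 100 TB")

# Worst case: every block packed with minimal outputs.
# Assumptions: ~1 MB of base block space, ~31 bytes per minimal P2WPKH
# output, and ~75 bytes stored per UTXO entry in the chainstate database.
blocks_per_year = 52_560               # ~6 blocks/hour * 24 * 365
outputs_per_block = 1_000_000 // 31    # block filled with minimal outputs
bytes_per_utxo = 75                    # assumed on-disk cost per UTXO entry

worst_growth_per_year = blocks_per_year * outputs_per_block * bytes_per_utxo
years_worst = (TARGET_BYTES - current_bytes) / worst_growth_per_year
print(f"Worst case: ~{years_worst:,.0f} years to reach 100 TB")
```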
Note that the UTXO set does not need to fit in RAM, nor does validating a block get significantly more expensive when the UTXO set is larger. You didn't say so, but that's something many seem to take for granted as the truth, and I have no idea why.
Bitcoin Core keeps a cache in RAM to improve performance, and lookups in the UTXO set are keyed for fast access. Validation doesn't go looking at each UTXO one by one to find the data referenced by a block, so it doesn't get slower as the set grows. This is exactly how SQL and other databases work; in fact, the chainstate is a database (LevelDB). Having your entire data set fit in RAM is certainly nice for performance, but it's only required when you need very high throughput and low latency. Bitcoin is a system where a few thousand transactions need to be processed on average every 10 minutes; ideally that takes seconds at most, but there's no hard requirement. The main cost of a larger UTXO set is a slower initial block download when setting up a new node without trusting any previous data.
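To illustrate the point (a conceptual sketch only, not Bitcoin Core's actual code): lookups are keyed by outpoint, with a RAM cache in front of the on-disk store, so a lookup costs the same no matter how many entries the set holds:

```python
# Conceptual sketch: a keyed store with an in-memory cache in front of it.
# Nothing here scans the whole set, so lookup cost doesn't grow with size.

class UtxoView:
    def __init__(self, disk_store):
        self.disk = disk_store   # stand-in for an on-disk key-value database
        self.cache = {}          # hot entries kept in RAM (the "dbcache" idea)

    def get(self, outpoint):
        # outpoint = (txid, vout) is the lookup key, like a database index
        if outpoint in self.cache:
            return self.cache[outpoint]
        entry = self.disk.get(outpoint)   # single keyed read, not a scan
        if entry is not None:
            self.cache[outpoint] = entry  # warm the cache for next time
        return entry

disk = {("ab" * 32, 0): {"value_sats": 50_000}}  # toy on-disk set
view = UtxoView(disk)
print(view.get(("ab" * 32, 0)))  # one keyed lookup, regardless of set size
```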
It's already addressed by the block size limit. It limits the rate at which UTXOs can be created.
Completely uninteresting, and it solves nothing at all unless done by miners who are taking a stance on the issue, leaving profit on the table to do so. Activist miners also cannot be stopped from spending their resources mining blocks with less "spam"; they can apply additional rate limiting on top of the block size limit. No one else can.
Luke Dashjr has long wanted to decrease the block size to only 300 KB. But Lightning doesn't thrive with small blocks either, I hear, so I don't think he has much, if any, support.
Making a transaction right now costs several dollars in fees. Would tens or hundreds of dollars in fees be preferable today? Is Lightning ready to step in?
More "annoying" than "existential" on a scale of severity I think. But I'm looking forward to your analysis.
FYI, I looked around, and the link to "understand your consumption" is broken on the plan types help page.
It's literally not a podcast unless it's an audio file available for download, to listen to as you please, with an RSS or Atom feed as the interface.
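For reference, roughly what that interface looks like: a minimal podcast feed item is just an entry with a downloadable audio enclosure (the URLs here are placeholders):

```xml
<item>
  <title>Episode 1</title>
  <enclosure url="https://example.com/ep1.mp3"
             length="12345678" type="audio/mpeg"/>
  <guid>https://example.com/ep1</guid>
</item>
```

Any generic podcast app can fetch that and play the file; no single platform gets to gatekeep it.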
Spotify is following the "embrace, extend, and extinguish" playbook in their attempt to kill podcasting for their benefit. It's a great thing it failed.
Automatic rebroadcast from the mempool could help somewhat, though not as well as being able to pull from peers in this situation. The idea in the PR below is both reactive and initiated by the network rather than the miner. It was closed, and I can't tell if it's being worked on elsewhere. It seemed like a good idea and another piece of the puzzle; I was looking forward to it.
Are there privacy risks associated with a peer serving its mempool to others? Like perhaps fingerprinting a node using darknets, if that matters.
Can anyone confirm if this is true/real?
Since there are no details whatsoever, it can't be verified. I have no reason to doubt it's true, but if there is a bug report, it was filed in private, so we don't know.
The article concludes (among other things) that filtering the mempool of non-mining nodes is desirable, but I found no argument for why this is.
A significant difference is that the Liquid network doesn't use yet another token to enrich investors. Instead it uses L-BTC natively, which cannot be printed out of thin air.
Without a tail emission, Bitcoin requires fees to compensate miners, and it requires congestion to develop that fee market.
Lightning was meant to scale Bitcoin. If you are correct that Lightning requires the Bitcoin network not to be congested, and there's no solution to this within Lightning, then Lightning is flawed and something else is needed.
Otherwise, get used to it and improve Lightning so that it works. If it cannot work, abandoning it is the right move.
Nevertheless, it was bound to happen one way or the other: either by monetary transactions successfully congesting the network, or by spam making monetary transactions more expensive than they would otherwise have been.
Either way, fees are the way, so best get used to it and build what's needed to stay useful despite congestion, whatever the cause.
Being closer just means faster.
Being shared just means there's less available in total.
None of which makes up for not having enough memory.
10 sats \ 0 replies \ @based 7 Oct 2023 \ parent \ on: AMD Announces Launch Of New Bitcoin Miner bitcoin
AMD64 would like a word with you.
If today's wallets have turned it around to make RBF effectively opt-out instead, then I agree it's less of a point.
Having to opt in, in advance, to being able to fix a simple mistake is not a good user experience. It asks users to plan for their mistakes and to know to do so.
If always enabled, privacy is also somewhat better, because it leaks less information about the user's wallet software and habits.
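For reference, the explicit opt-in signal is just a sequence-number rule. A minimal sketch of BIP 125 signaling (inherited signaling from unconfirmed ancestors is omitted):

```python
# BIP 125: a transaction explicitly signals replaceability if any of its
# inputs has an nSequence value below 0xfffffffe.

RBF_THRESHOLD = 0xFFFFFFFE

def signals_rbf(input_sequences):
    """True if at least one input opts into replace-by-fee (BIP 125)."""
    return any(seq < RBF_THRESHOLD for seq in input_sequences)

print(signals_rbf([0xFFFFFFFD]))  # True: explicitly replaceable
print(signals_rbf([0xFFFFFFFF]))  # False: final, not signaling
```

This signaling difference is exactly the kind of detail that leaks which wallet software created a transaction.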
Bitcoin settles on-chain. Settling in the mempool is not by design.
Bitcoin must be scaled up in layers, and the current solution for small, fast payments is Lightning.