We knew from block 0 that, at the margin, fees will be the only way for miners to make money, so there must come a point when fees will surpass block subsidy. Fighting or delaying something that's inevitable is pointless.
SegWit was a "mistake" in that it sped us up to this point. We need to accept that and learn to live with it.
What we need is for certain Bitcoin Core developers to get off their high horses and start solving practical problems for the future, such as how a node will still run on a Raspberry Pi if every single person on earth owns at least one UTXO (and AI agents may have many more).
Alternatively, they need to switch their talent to L2 and get Lightning out of its growing pains phase so that Uncle Jims can safely operate all over the world for their families/communities.
What we need is for certain Bitcoin Core developers to get off their high horses and start solving practical problems for the future, such as how a node will still run on a Raspberry Pi if every single person on earth owns at least one UTXO (and AI agents may have many more).
Could you please elaborate what problems you expect with that?
As far as I know, Bitcoin Core keeps the entire UTXO set indexed in memory so that it can validate incoming transactions and blocks faster. 8 billion UTXOs would require tens, if not hundreds, of gigabytes of RAM. This is completely infeasible not only for top-tier Raspberry Pis but also for any commodity hardware.
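Back-of-envelope, assuming a naive in-memory map (the ~100 bytes per cached coin is my assumption for the outpoint key plus amount, height, scriptPubKey, and hash-map overhead, not a measured figure):

```python
# Rough RAM estimate for holding one UTXO per person in a naive in-memory map.
# The bytes-per-entry figure is an assumption, not measured from Bitcoin Core.
utxos = 8_000_000_000           # one UTXO per person on earth
bytes_per_entry = 100           # assumed in-memory cost per cached coin
total_gb = utxos * bytes_per_entry / 1e9
print(f"~{total_gb:.0f} GB of RAM")  # ~800 GB
```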
Bitcoin Core does not keep the entire UTXO set in memory. Starting from an empty UTXO cache, we only add UTXOs that are needed to validate transactions and new UTXOs created by blocks we process. (The mempool maintains its own implicit UTXO set across the unconfirmed transactions it holds.) Whenever the UTXO cache reaches its limit, all changes are persisted to disk and the UTXO cache is flushed. IIRC, it is also flushed every 24h at the latest. If a newly created UTXO is spent (by a confirmed transaction) before we flush, that UTXO is deleted from the cache immediately and never persisted to disk.
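Roughly, that write-back behaviour looks like this (a toy model with made-up names, not Bitcoin Core's actual `CCoinsViewCache`; the flush policy is simplified):

```python
# Toy write-back UTXO cache: coins are pulled in on demand, new coins are
# added as blocks connect, and a coin created and spent before a flush is
# dropped without ever touching disk. Not Bitcoin Core's real implementation.
class UtxoCache:
    def __init__(self, disk, limit):
        self.disk = disk      # backing store: dict of outpoint -> coin
        self.limit = limit    # max cached entries before we flush
        self.cache = {}       # outpoint -> coin, or None (pending delete)
        self.fresh = set()    # outpoints created since the last flush

    def fetch(self, outpoint):
        """Load a coin into the cache on demand (e.g. to validate a tx)."""
        if outpoint not in self.cache:
            self.cache[outpoint] = self.disk[outpoint]
        return self.cache[outpoint]

    def add(self, outpoint, coin):
        """A new output created by a block; 'fresh' = not yet on disk."""
        self.cache[outpoint] = coin
        self.fresh.add(outpoint)
        if len(self.cache) >= self.limit:
            self.flush()

    def spend(self, outpoint):
        """A confirmed spend. Fresh coins vanish without any disk write."""
        self.fetch(outpoint)
        if outpoint in self.fresh:
            del self.cache[outpoint]       # never persisted at all
            self.fresh.discard(outpoint)
        else:
            self.cache[outpoint] = None    # tombstone: delete at flush time

    def flush(self):
        """Persist all changes to disk and empty the cache."""
        for outpoint, coin in self.cache.items():
            if coin is None:
                self.disk.pop(outpoint, None)
            else:
                self.disk[outpoint] = coin
        self.cache.clear()
        self.fresh.clear()
```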
A node would be able to synchronize the blockchain faster and validate blocks containing previously unseen transactions more quickly if it could keep the entire UTXO set in memory, but even if you have set a huge dbcache, that can only happen once: from when you first start your node and synchronize until your node is restarted (or has run for 24h). Unless your node must absolutely minimize transaction validation and block processing time, it is also completely unnecessary. Once you are caught up to the chaintip, your node learns about unconfirmed transactions continuously. We validate transactions before adding them to the mempool, at which point we have already retrieved all necessary UTXOs from disk, and we also cache their script validation. When a block comes in, we only need to retrieve UTXOs for transactions that the node does not already have in its mempool.
It would also take north of 10 years to create that many UTXOs, even if we designated all blockspace to that purpose. So whatever Raspberry Pi successor people run at that point will hopefully be a bit more beefy than today.
In that case, I am genuinely curious why people make such an issue of "utxo bloat" (it's one of the contentions against Ordinals)? Because disk space, unlike RAM, should not be a concern these days.
A larger UTXO set does push up the minimum chainstate a pruned node needs to keep around, and more UTXOs make it (slightly) slower to load UTXOs from disk, while also requiring the UTXO cache to be flushed slightly more often. It is more of a graceful degradation, though.
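Rough numbers on the chainstate point (the ~50 bytes per serialized UTXO is an assumption; real chainstate entries are compressed and vary by script type):

```python
# Back-of-envelope chainstate size for a pruned node. The bytes-per-UTXO
# figure is a rough assumption, not a measured Bitcoin Core value.
utxos = 8_000_000_000
bytes_on_disk = 50    # assumed serialized size per chainstate entry
chainstate_gb = utxos * bytes_on_disk / 1e9
print(f"~{chainstate_gb:.0f} GB of chainstate")  # ~400 GB
```

Even pruned nodes must keep all of that around, since any UTXO can be spent by a future block.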