

Retweeting this, and also explaining the direct impact on @BtcPayServer. (But first, read Antoine's post with a cool head, even if you dislike him.)
๐“๐‹;๐ƒ๐‘: Most deployed BTCPay Servers will, sooner or later, be bricked. The change to the OP_RETURN size limit may prevent this. Hereโ€™s why.
When I created BTCPay Server, a hard requirement was keeping costs as low as possible and not requiring our users to touch the command line. That's why we love Bitcoin: everyone can verify.
The most popular option was (and still is) to buy a server on LunaNode. You could host BTCPay for $7 per month (M2 instance).
It comes with a 20GB SSD.
๐‡๐จ๐ฐ ๐ฐ๐ž ๐ค๐ž๐ฉ๐ญ ๐ฌ๐ž๐ซ๐ฏ๐ž๐ซ ๐œ๐จ๐ฌ๐ญ๐ฌ ๐ฅ๐จ๐ฐ
To keep costs down, blocks are stored on a separate volume of ~80GB. Our default deployment runs in pruned mode. This means blocks are downloaded, verified, stored temporarily, and discarded over time. As a result, the volume size is always sufficient.
Bitcoin doesn't only store blocks; it also stores the current state (the UTXO set), which requires high throughput. Because of this, unlike the blocks, it isn't stored on the separate volume but on the main drive (20GB).
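For illustration, a layout like the one described maps onto standard Bitcoin Core settings roughly as follows (the mount path and prune target are illustrative, not BTCPay's actual values; `prune` and `blocksdir` are real options):

```ini
# bitcoin.conf sketch: pruned node with blocks on a cheap secondary volume.
# Values are illustrative, not BTCPay Server's exact configuration.
prune=550               # discard old block files, keeping roughly the newest 550 MiB
blocksdir=/mnt/blocks   # raw blocks live on the separate ~80GB volume
# The chainstate (UTXO set) stays in the default datadir on the 20GB main drive,
# because it needs the faster disk and cannot be pruned away.
```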
At the time, the UTXO set had always grown slowly and stayed relatively small, so 20GB was considered enough to keep costs low.
Up to 2022, the UTXO set was about 4GB. Now, it's around 12GB, almost entirely due to spam. Uh oh.
This means very old BTCPay Servers on M2 instances will soon be bricked (since the 20GB must also hold the OS).
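Some back-of-the-envelope arithmetic makes the problem concrete. The UTXO figures come from the numbers above; the OS-plus-software footprint is an assumed placeholder, not a measured value:

```python
# Rough check: how much headroom does a 20GB main drive have left?
# All figures in GB. The OS/software footprint is an illustrative assumption.
DISK = 20
os_and_software = 8   # assumed: OS, Docker images, databases, logs
utxo_2022 = 4         # chainstate size up to 2022 (from the post)
utxo_now = 12         # chainstate size today (from the post)

print(DISK - os_and_software - utxo_2022)  # headroom then: 8
print(DISK - os_and_software - utxo_now)   # headroom now: 0
```

With zero headroom, any further chainstate growth fills the disk and the node can no longer start.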
If you create a new server on LunaNode today, the default is now M4 (twice as expensive, $14 per month).
If your server is still running on M2, I strongly advise upgrading ASAP. Once the storage runs out, your server won't be able to restart, and upgrading after that point may or may not go smoothly. Regardless of all of the above, M4 is recommended as it also improves stability.
๐“๐ก๐ž ๐ฉ๐ซ๐จ๐›๐ฅ๐ž๐ฆ ๐จ๐Ÿ ๐ฎ๐ง๐›๐จ๐ฎ๐ง๐๐ž๐ ๐ฌ๐ญ๐จ๐ซ๐š๐ ๐ž ๐ ๐ซ๐จ๐ฐ๐ญ๐ก
If the UTXO set keeps growing forever, it's a big problem. You couldn't just set up BTCPay Server and forget about it; you'd need to monitor storage and upgrade your machine periodically. That's a high technical barrier for most merchants.
As Antoine points out, this spam takes two approaches: one uses unspendable outputs, the other uses OP_RETURN.
The nice thing about OP_RETURN is that its data is only stored in blocks, which means your node can safely discard it. Thanks to this, the UTXO set can stay stable in size, or even shrink.
๐“๐ก๐ž ๐ฉ๐ซ๐จ๐›๐ฅ๐ž๐ฆ ๐จ๐Ÿ ๐Ÿ๐š๐ฌ๐ญ ๐ฌ๐ฒ๐ง๐œ
Another issue with a larger UTXO set is fast sync. Setting up a new BTCPay Server could once be accelerated by downloading 4GB of data instead of the entire blockchain.
Now it's 12GB. As a result, we've reduced the frequency of fast sync snapshot updates for BTCPay Server.
This makes bootstrapping a server way slower.
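For context, recent Bitcoin Core versions (v26+) ship a related mechanism, assumeutxo: a node loads a trusted UTXO snapshot, becomes usable almost immediately, and verifies the historical chain in the background. A sketch of the CLI flow (the snapshot path and filename are hypothetical):

```
# Load a UTXO set snapshot so the node syncs to the tip quickly;
# full validation of history continues in the background.
bitcoin-cli loadtxoutset /var/snapshots/utxo-snapshot.dat
```

The larger the UTXO set, the larger these snapshots get, which is exactly the bootstrapping cost described above.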
I know all of this doesn't cover all your grievances with Core (why don't they just keep the knobs?).
I know there exist ways to make the problem of UTXO set growth go away (why don't they just implement Utreexo, X/Y/Z?).
But suffice it to say that those solutions are far from ready, and they wouldn't depend solely on Core to work in practice.
Antoine's post is quite interesting too (responding to Mechanic asking why increasing OP_RETURN would be useful).
You know why, you're just engaging in bad faith. But I'll repeat the reason for anyone lurking who's not aware.
The point of making it bigger was that some applications want to have a proof of publication for a small amount of data (like Lightning does) in case a specific transaction in an offchain protocol is broadcast.
This data needs to be in the non-witness part of the transaction, so they can't use the (cheaper and already-available) inscription mechanism. They also cannot use OP_RETURN outputs because of the misguided 80-byte policy limit on those.
If they only wanted to store data onchain, they wouldn't be too concerned about policy limits. They could just have leveraged private APIs to miners (as they unfortunately already do for other transactions in their protocol), or even just used Libre Relay.
But what they are really interested in isn't to store this small amount of data, it is to do so while using the p2p transaction relay network. This is why the policy limits were a binding constraint for them, and are not one for people who just want to store arbitrary data onchain regardless of how it gets there. The reason why they want to use the p2p transaction relay network is because their transactions are time-dependent (again, like in Lightning) and the p2p transaction relay network is the best mechanism available today to propagate your transaction to as many miners as possible in a timely manner, while making it hard for an adversary to prevent its propagation.
Because they wouldn't give up this property, important to the security of their protocol, they routed around the OP_RETURN policy limit by storing the data in unspendable outputs instead. This method is strictly more harmful to everybody, and it was incentivized by the misguided policy limits on OP_RETURN outputs (which don't achieve anything anymore since inscriptions and since people have started being serious about bypassing mempool policy).
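The asymmetry between the two routing-around methods can be sketched as a toy model (not real node code): an OP_RETURN output is provably unspendable, so nodes drop it from the chainstate, while a fake-key output looks spendable and must be kept in the UTXO set forever.

```python
# Toy model: count how many outputs a node must keep in its UTXO set.
OP_RETURN = 0x6A

def utxo_entries_added(outputs: list[dict]) -> int:
    """An output whose script starts with OP_RETURN is provably unspendable,
    so nodes never add it to the chainstate; anything else is kept forever."""
    return sum(1 for o in outputs if o["script"][0] != OP_RETURN)

# Two hypothetical transactions embedding the same payload:
via_op_return = [{"script": bytes([OP_RETURN, 4]) + b"data"}]  # prunable
via_fake_keys = [{"script": bytes([0x76]) + b"\x00" * 24}]     # looks spendable, never is

print(utxo_entries_added(via_op_return))  # 0 -> no permanent UTXO growth
print(utxo_entries_added(via_fake_keys))  # 1 -> bloats the UTXO set forever
```

This is what "strictly more harmful" means in practice: the same payload, but every node pays for it in chainstate storage indefinitely.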
Because the policy limit on OP_RETURN outputs was not achieving anything but incentivizing harmful behaviour, Bitcoin Core contributors decided collectively after long discussions to get rid of it.
0 sats \ 0 replies \ @carter 1h
This is what I've been saying! We need a way to allow people to publish data without stuffing it into random fields. OP_RETURN at least helps (even if it's more expensive) because it allows people to explicitly say "this is data", so you don't grow the UTXO set.
0 sats \ 1 reply \ @xz 5h
Do many BtcPayServers still run on home servers like RasPi or similar?
And Umbrel too