Didn’t have time to watch the whole thing, but Segwit and Taproot were even more elegant than Wicked explained in the first few minutes. The new rules only apply to blocks that contain segwit inputs or taproot inputs respectively.
Regarding segwit, if your block does not contain any segwit transactions, it doesn’t require the newly introduced witness commitment. As unupgraded nodes would consider transaction inputs following the new rules non-standard, they would not collect such transactions into their mempool and not include them in their block templates. Throughout segwit signaling, and even after segwit was activated, unupgraded miners could therefore still build valid blocks that simply did not include any segwit transactions.
For Taproot similarly, transactions containing P2TR inputs would be considered non-standard, and miners would not include them by default.
Since neither of these two soft forks’ deployment mechanisms had a mandatory signaling phase, unupgraded miners would just chug along fine on the same chain throughout signaling and even after activation, beyond potentially missing out on juicy segwit/taproot transactions.
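To make the backward compatibility concrete, here is a minimal Python sketch (the dict-based transaction representation and the function name are mine, not Bitcoin Core's): the witness commitment only becomes mandatory when a block actually carries witness data, so a block built entirely from pre-segwit transactions remains valid for upgraded and unupgraded nodes alike.

```python
def needs_witness_commitment(transactions):
    """A block requires the segwit witness commitment only if at least
    one of its transactions carries witness data. (Illustrative sketch,
    not Bitcoin Core's actual implementation.)"""
    return any(tx.get("has_witness", False) for tx in transactions)

# A block of purely pre-segwit transactions stays valid under the old rules:
legacy_block = [{"txid": "a1", "has_witness": False},
                {"txid": "b2", "has_witness": False}]

# Adding a single segwit spend makes the commitment mandatory:
segwit_block = legacy_block + [{"txid": "c3", "has_witness": True}]

print(needs_witness_commitment(legacy_block))  # False
print(needs_witness_commitment(segwit_block))  # True
```

This is why unupgraded miners could keep producing valid blocks: by excluding witness-carrying transactions, they never triggered the new rule at all.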
The assumevalid feature still checks that all transactions’ content matches the corresponding txid and that all txids are committed to by the block. This already guarantees that you get exactly the same blockchain byte-by-byte as everyone else on the network. It also runs most of the other checks on transactions and blocks, e.g., that transactions are well-formed, that there is no double-spending, the proof-of-work rules, the block weight limit, etc.
The assumption that assumevalid makes is that the scripts in the transactions are valid, i.e., that the signatures, input scripts, and output scripts would evaluate satisfactorily. This assumption is reasonable, because the transactions are buried under months of proof of work (or however far you set a manual assumevalid point into the past), and the entire network has been building on these transactions for months. If any of the scripts had been invalid, nodes should have rejected the transactions months ago. It would be entirely unexpected for anyone to expend all this proof of work to extend an invalid chain. However, this does represent a (small!) security reduction traded off for a big speed-up.
Even if you configure a custom assumevalid hash, your node will always follow the most-proof-of-work chain it learns about. If the configured hash is not in the chain your node is processing, it will do full script validation for the entire blockchain instead.
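The decision logic described above can be sketched in a few lines of Python (the function and parameter names are hypothetical, chosen for illustration, and deliberately simplified compared to Bitcoin Core):

```python
def must_validate_scripts(block_height, assumevalid_height, chain_contains_assumevalid):
    """Sketch of the assumevalid decision. Script checks may be skipped
    only for blocks at or below the configured assumevalid block, and
    only when that block is actually part of the chain being validated.
    All other consensus checks (txid commitments, proof of work, block
    weight, etc.) run either way."""
    if not chain_contains_assumevalid:
        # Configured hash is not in this chain: full script validation.
        return True
    # Blocks past the assumevalid point always get full script validation.
    return block_height > assumevalid_height

print(must_validate_scripts(100_000, 800_000, True))   # False: scripts skipped
print(must_validate_scripts(800_001, 800_000, True))   # True: past the assumevalid point
print(must_validate_scripts(100_000, 800_000, False))  # True: hash not in this chain
```

Note that returning `False` here only means the script evaluation step is skipped; the structural and proof-of-work checks mentioned earlier are unconditional.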
Either way, you always trust the software you’re running to work as advertised. If the software is malicious or has defects, you could have any sort of unacceptable behavior, so that’s not a new assumption.
Checkpoints are a related but different concept which Bitcoin Core no longer uses.
Oh, I thought it was common knowledge that mempool.space put out both sometime after the inscription shenanigans started. — Well, more precisely, presumably when they came up with the Mempool Goggles feature.
Sort of. There is a bit of a terminology mess there. There used to be actual orphan blocks before we had headers-first synchronization, i.e., blocks for which we didn’t know the parent block. That can’t happen anymore, because we always announce headers first and only retrieve blocks for which we know the parent block.
We refer to blocks that are not part of the best chain as extinct blocks or stale blocks, but a long time ago someone used the term orphan blocks for that and it stuck. Presumably, the term stems from Bitcoin Core labeling the block reward of a stale block as “orphaned” in the code as it’s not part of the best chain.
Concept meh.
- this is extremely blockspace-inefficient: legitimate owners need 50 txs to move one UTXO
- mandates address reuse, which is bound to leak information on the coin owners’ usage patterns
- creates massive competition for inclusion among all remaining P2PK UTXOs upon activation, with a potentially huge portion of the P2PK UTXOs being turned into fees by users wanting to get them out sooner than in 32 years
If the idea is to turn the remaining P2PK UTXOs into a ~constant tail emission, it would be more honest to propose a hardfork that does that.
Scoresby's critique assumes that BIP-110 could fail to activate, but it can't. There's no timeout or "failed" state. Mandatory signaling forces lock-in at max_activation_height, regardless of organic support. The chain-split scenarios described rely on minority hashrate activation, but the 55% threshold prevents this.
Uh… Your own article has table columns that are labeled “BIP 110 doesn’t activate”:
And you see, that’s where you lose me: when RDTS activates, all nodes running RDTS software will start to enforce it. If that covers only a minority of the hashrate, where does the additional hashrate suddenly come from to suddenly make the hashrate of the RDTS chaintip jump to 55%?
If, e.g., 10% of the hashrate runs RDTS and participates in the mandatory signaling, the first block not signaling spawns a separate chaintip that, with 90% of the hashrate, builds blocks 9× as fast and just leaves the RDTS nodes behind.
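The arithmetic behind that 9× figure is simple enough to spell out (a toy calculation, assuming hashrate translates proportionally into expected block production):

```python
# Expected relative block production after the split described above:
# RDTS nodes keep 10% of the hashrate, the non-signaling chaintip gets the rest.
rdts_share = 0.10
other_share = 1 - rdts_share  # 0.90

# The majority chaintip grows this many times faster than the RDTS chaintip:
speed_ratio = other_share / rdts_share
print(speed_ratio)  # 9.0
```

So unless the minority chaintip attracts hashrate from somewhere, it simply falls further and further behind.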
I would suggest that you first address the points @Scoresby raised in the OP, but beyond that for starters:
- You are wrong about the fork activating immediately upon reaching the threshold
- It makes no sense to claim that miners should signal late. If they support the proposal, signaling early would help build momentum and be less risky.
- You misrepresent the potential downsides of being on the wrong side of the soft fork attempt
Thanks for sacrificing your time so we didn’t have to. From what you show, Carvalho’s description is completely untenable.
You might find these Bitcoin Stackexchange topics interesting:
- What is the Lightning Network proposal? What problem is it trying to solve?
- How does the Lightning network work in simple terms?
(Disclosure: my own posts.)
Yo @sorukumar, I think this one will interest you.
Actually, I don’t think it’s hard. A payment transaction can move any amount of money with a fairly small amount of weight. Spam transactions generally take more weight per operation, and most of them don’t have an open-ended value to their senders.
With the waning interest in inscriptions and runes, feerates of spam transactions are minuscule:
(from this dashboard: https://dune.com/murchandamus/inscription-brc20-weight-and-percentage)
E.g., yesterday, spam transactions paid 4.7% of fees for 47% of the blockspace. Payment transactions paid 95.3% of fees for 53% of the blockspace. So, payment transactions are currently paying about 18× higher feerates on average than spam. To me that sounds like spam transactions are only bidding on underdemanded blockspace at the very bottom of the mempool, around 0.1 s/vB. As the mempool is clearing out, they are finally selected into blocks after sitting there for an indeterminate amount of time. There just seems to be too little demand for blockspace from payment transactions right now.
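For anyone who wants to check the 18× figure, here is the back-of-the-envelope calculation from the dashboard percentages quoted above:

```python
# Shares of total fees and total blockspace from yesterday's numbers.
spam_fee_share, spam_space_share = 0.047, 0.47
payment_fee_share, payment_space_share = 0.953, 0.53

# "Feerate index": fees paid per unit of blockspace consumed.
spam_feerate_index = spam_fee_share / spam_space_share          # 0.1
payment_feerate_index = payment_fee_share / payment_space_share # ~1.80

ratio = payment_feerate_index / spam_feerate_index
print(round(ratio))  # 18
```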
when inscription spam drives fees through the roof, regular people can’t afford to make on-chain transactions anymore
We must be living in different universes. The mempool cleared this week to 1.04 blocks worth of transactions waiting. The feerate necessary to be in the next block is currently 0.1033 s/vB. The spam transactions make up the absolute bottom of the mempool, their feerates are a fraction of those of payment transactions. So that “market failure” your son(?) is decrying seems to be insufficient demand for payment transactions. How is that supposed to be fixed by BIP 110?
BIP 322 basically creates a fake transaction and uses the signatures on that transaction to attest that the owner controls these UTXOs. For hash-based output scripts, the inputs must reveal the corresponding input script for the signatures to be verifiable.
Personally, I’m still convinced that we will not see a CRQC in the next four decades, and therefore I don’t find it concerning to show public keys. Unless you assume that a CRQC exists, public keys being public is not an issue.
Bad bot. Taproot does not use Merkleized Abstract Syntax Trees; it uses Merkleized Alternative Script Trees.