
This seems like a pretty good model of what filtering valid transactions does to the Bitcoin network.
Laurent concludes that "At 90%, propagation to miners starts to become unreliable and propagation to non-listening nodes and to filtering listening nodes is already well damaged."

What does this mean?

On the one hand, it means that unless more than 90% of the network is running a filter, it likely won't change what ends up in blocks.
On the other hand, it means that filters can affect nodes even at lower levels of adoption, especially non-listening nodes, leading to negative side effects.
Whether you want to see more filters in Bitcoin or not is immaterial. Laurent makes a very good point: "We can't prevent people (honest or malicious actors) from running filters on their nodes. It's in our collective interest to minimize the negative effects filters have on non-listening nodes (even those who don't use filters)."

The simulation

New simulation of transaction propagation on a network of 2k nodes with 20% listening nodes.¹ But this time we will monitor the propagation at a more detailed level, based on two criteria: the type of node and its filtering status...
The idea (suggested by @Murch) is that propagation to non-filtering listening nodes will be a good proxy for how well transactions propagate to "tolerant" miners, and thus how likely they are to be included in a block...
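To make the setup concrete, here is a minimal sketch of that kind of simulation (not Laurent's code): a flood-relay model with 2,000 nodes, 20% of them listening, 8 full-relay outbound connections per node (the 2 blocks-only connections carry no transactions, so they are ignored here), and a filter that makes a node drop a transaction on receipt instead of relaying it. Everything beyond the parameters quoted above is an assumption for illustration.

```python
# Minimal sketch of the propagation model described above (not Laurent's code).
# Assumptions: flood relay over full-relay connections only; a filtering node
# counts as having "received" a tx when a peer announces it, but never relays it.
import random
from collections import deque

N_NODES = 2000
LISTENING_SHARE = 0.20
OUTBOUND = 8        # full-relay outbound connections per node
N_TRIALS = 100      # independent transactions simulated per adoption level

def build_network(rng):
    listening = [i < int(N_NODES * LISTENING_SHARE) for i in range(N_NODES)]
    listeners = [i for i in range(N_NODES) if listening[i]]
    peers = [set() for _ in range(N_NODES)]
    for i in range(N_NODES):
        # every node opens its outbound connections to listening nodes only
        for j in rng.sample([l for l in listeners if l != i], OUTBOUND):
            peers[i].add(j)
            peers[j].add(i)  # tx relay is bidirectional on a connection
    return listening, peers

def propagate(origin, peers, filtering):
    """Flood a tx from `origin`; filtering nodes receive it but never relay it."""
    received = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        if filtering[node]:
            continue  # tx is dropped here: seen, but not announced to peers
        for peer in peers[node]:
            if peer not in received:
                received.add(peer)
                queue.append(peer)
    return received

def run(adoption, rng):
    listening, peers = build_network(rng)
    filtering = [rng.random() < adoption for _ in range(N_NODES)]
    groups = {
        "non-listening":           [i for i in range(N_NODES) if not listening[i]],
        "listening/filtering":     [i for i in range(N_NODES) if listening[i] and filtering[i]],
        "listening/non-filtering": [i for i in range(N_NODES) if listening[i] and not filtering[i]],
    }
    reach = {name: [] for name in groups}
    non_filtering = [i for i in range(N_NODES) if not filtering[i]]
    for _ in range(N_TRIALS):
        # the tx is created on a random non-filtering node
        origin = rng.choice(non_filtering)
        got = propagate(origin, peers, filtering)
        for name, members in groups.items():
            if members:
                reach[name].append(sum(i in got for i in members) / len(members))
    return {name: sum(v) / len(v) for name, v in reach.items() if v}

if __name__ == "__main__":
    rng = random.Random(1)
    for adoption in (0.5, 0.6, 0.7, 0.8, 0.9):
        result = run(adoption, rng)
        print(f"adoption={adoption:.0%}  " +
              "  ".join(f"{k}: {v:.3f}" for k, v in result.items()))
```

The key modelling choice is that a filtering node still counts as having "received" the transaction when a peer announces it; it just never passes it on. Sweeping the adoption fraction and averaging the reach per node class is what the staged observations quoted below are describing.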

"From 0 to 50% of adoption, filters have almost no effect on transactions propagation. Almost all nodes see all transactions."

"At 60% of adoption, filters have no effects on listening nodes but we can observe some limited negative effects on the propagation to non-listening nodes: some transactions aren't received by all non-listening nodes."

"At 70% of adoption, there is still no effect on listening nodes but the negative effects on non-listening nodes worsen. More transactions and more non-listening nodes are impacted."

"At 80% of adoption, the negative effects on non-listening nodes worsen even more. On their side, the listening nodes enter a dual regime: the non-filtering nodes continue to receive all txs while some txs are now poorly propagated to filtering nodes."

"At 90% of adoption, the situation worsens on all fronts. At this point some transactions don't reach all non-filtering nodes."

The main observation from this simulation is that non-listening nodes are the first impacted by a new filter (starting at ~60% adoption), followed by filtering listening nodes (at ~80%) and non-filtering listening nodes (at ~90%).
A second observation is that relaxing an existing, widely adopted filter is potentially far more damaging to the network than adopting a new one, since the newly permitted transactions start out facing close to 100% filter adoption: the network drops straight into its "chaotic" regime.
Finally, note that adding a secondary cache storing rejected txs won't be enough to solve the negative side effects filters have on nodes (say, beyond 80% of adoption), since a node can't cache what it never received.
Some of the damage that filters can do to nodes:
  • Nodes have to play catch-up when blocks are found containing txs they don't already know about
  • Nodes might not learn about incoming txs that are important to them
  • Nodes get less reliable feerate estimates
  • Block reconstruction takes longer at the network level, since more nodes have never received the missing transactions (see the sketch below)
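The block reconstruction point in the last bullet can be put in rough numbers. Under compact block relay (BIP 152), a node reconstructs a block from the transactions it already has; anything it never received must be fetched with an extra getblocktxn/blocktxn round trip before the block can be fully reconstructed and validated. A back-of-the-envelope sketch, with purely illustrative numbers (the per-transaction miss probability and the count of filtered-type txs per block are not measurements):

```python
# Illustrative only: probability that compact-block reconstruction needs at least
# one extra round trip, assuming each filtered-type tx in the block was missed
# independently with probability p_missing.
def p_extra_round_trip(p_missing: float, txs_of_type: int) -> float:
    return 1.0 - (1.0 - p_missing) ** txs_of_type

for p_missing in (0.01, 0.05, 0.20):
    for txs_of_type in (5, 50, 500):
        print(f"p_missing={p_missing:>4.0%}  txs={txs_of_type:>3}  "
              f"P(extra round trip)={p_extra_round_trip(p_missing, txs_of_type):.2%}")
```

Even a small per-transaction miss rate adds up quickly once a block contains many transactions of the filtered type, which is why widely (but not universally) deployed filters slow block propagation for everyone, not just for the nodes running them.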
In another post, Laurent has this description of the purpose of filters, which is pretty accurate: "protection of nodes + nudging devs/users into best practices."

Footnotes

  1. A non-listening node is a node that does not accept incoming connections. It gets all its information about new blocks and new transactions from the nodes it initiated connections to (in this simulation, 8 full-relay connections plus 2 blocks-only connections per node). Listening nodes have the same 8 outbound and 2 blocks-only connections, plus inbound connections.
@d01abcb3eb:
I didn't see this post and it didn't show as a duplicate when I posted. But I linked to xcancel, so that may be why. I deleted my dupe, so 30 sats down the drain.
Anyway. My interpretation here is that during the initial stages of Core 30 deployment we'll actually have the scenario of high adoption of a filter and thus potentially chaotic TX-propagation.
Yes. I wonder how long it will persist. Keeping in mind that all old versions of Core will still be at the old OP_RETURN limit, if adoption of Core 30 is around 5-10%, it seems like it could be a strange time.