There's a (sort of) solution to this tail emission debate that isn't mentioned in the article.
We will have to hard fork Bitcoin at some point (the year 2038 problem); don't panic, we have until 2100-ish. With that hard fork we could roll in more changes at once, like increasing the block size, something we still don't need. I'm sure there will be other issues waiting to be sorted out by then.
Anyway, what we could also do is move to units smaller than satoshis. We already use millisats (three extra decimal places) on the LN; we could go much further.
Now, on the current schedule, the final subsidy era ending around 2140 pays just 1 satoshi per block before dropping to zero. What if that weren't the last halving, and we kept the cycles going instead, rewarding miners 0.5 satoshi, then 0.25 sat, and so on?
This way we wouldn't need to implement tail emission, and the 21 million coin cap would still be intact.
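Here's a minimal sketch of that idea (exact fractions standing in for the integer-satoshi consensus arithmetic, purely for illustration): the geometric series still sums to exactly 21 million, so the cap survives endless halvings.

```python
from fractions import Fraction

HALVING_INTERVAL = 210_000      # blocks per subsidy era
INITIAL_SUBSIDY = Fraction(50)  # BTC per block in era 0

def subsidy(height: int) -> Fraction:
    """Subsidy if halvings continued forever, with unlimited precision
    (today's consensus code instead rounds down to whole satoshis,
    which is why the schedule currently ends around 2140)."""
    return INITIAL_SUBSIDY / 2 ** (height // HALVING_INTERVAL)

# Era 33 would be the first sub-satoshi era under this scheme.
print(float(subsidy(33 * 210_000) * 100_000_000))  # ~0.58 sat per block

# Each era issues 210,000 * 50 / 2^i BTC; the series sums toward
# 210,000 * 50 * 2 = 21,000,000, approaching but never exceeding the cap.
total = sum(HALVING_INTERVAL * INITIAL_SUBSIDY / 2**i for i in range(200))
print(float(total))  # -> 21000000.0
```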
Note: there's a limiting factor, the maximum value of a 64-bit integer (18,446,744,073,709,551,615), but I'm sure we'll find a way around that too.
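For a back-of-the-envelope sense of how tight that limit is (assuming amounts stay in a signed 64-bit field, as with Bitcoin Core's CAmount):

```python
# How many extra decimal places fit before the full supply overflows int64?
INT64_MAX = 2**63 - 1                 # 9,223,372,036,854,775,807 (signed)
CAP_SATS = 21_000_000 * 100_000_000   # 2.1e15 sats for the whole supply

extra_places = 0
while CAP_SATS * 10 ** (extra_places + 1) <= INT64_MAX:
    extra_places += 1
print(extra_places)  # -> 3: millisat precision already nearly exhausts int64
```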
When the block subsidy is 1 sat, the block reward will be 99.99999% fees, which will hopefully be orders of magnitude higher than the subsidy.
Allowing sub-sat block subsidies does not make sense when they will be eclipsed by fees.
Either the fees will be sufficient or they won't be, but allowing endless halvings is definitely not the solution.
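The rough arithmetic behind that percentage (the 0.1 BTC fee figure here is a hypothetical, not a prediction):

```python
# At a 1-sat subsidy, any plausible fee level makes the subsidy noise.
subsidy_sats = 1
fees_sats = 10_000_000           # 0.1 BTC of fees per block, for illustration
fee_share = fees_sats / (fees_sats + subsidy_sats)
print(f"{fee_share:.7%}")        # -> 99.9999900%
```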
The halvings can't be endless, and I agree, the fees will be the main source of miners' income before we have such a fork.
Yes, that's a good point you raise: as technology improves, the average node could probably handle a lot more, so we could safely increase the block size without cutting people off from validating the network, and those bigger blocks may result in more fees.
Hmmm, interesting. So halving forever into units we might not care about now but will care about in the future, or even just holding sub-satoshi block subsidies until they add up to 1 satoshi before paying out.
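A toy sketch of that accrual idea (the function and its behavior are made up to illustrate the mechanism, not a proposed consensus rule):

```python
from fractions import Fraction

def payouts(subsidy: Fraction, blocks: int):
    """Yield whole-satoshi payouts, carrying the sub-sat remainder forward."""
    accrued = Fraction(0)
    for _ in range(blocks):
        accrued += subsidy
        whole = int(accrued)   # whole satoshis ready to pay out
        accrued -= whole
        yield whole

# With a 0.25-sat subsidy, a miner would collect 1 sat every fourth block.
print(list(payouts(Fraction(1, 4), 8)))  # -> [0, 0, 0, 1, 0, 0, 0, 1]
```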
I'm pretty sure the 2038 problem has already been accounted for in Bitcoin, and it didn't require a hard fork because the block format had already taken it into account. The concern was around the difficulty adjustment calculation, and since the unix timestamp is only used as part of a difference there, the counter overflow cancels out in the two-weeks-of-seconds measurement.
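A toy illustration of that cancellation (made-up timestamps, not Bitcoin Core code): subtraction modulo 2^32 recovers the true two-week span even when the counter wraps.

```python
MASK32 = 0xFFFFFFFF
TWO_WEEKS = 14 * 24 * 60 * 60  # 1,209,600 seconds

first_block_time = 4_294_966_000                           # just before the 2^32 rollover
last_block_time = (first_block_time + TWO_WEEKS) & MASK32  # wraps past zero

# The wraparound cancels out in the modular difference.
actual_timespan = (last_block_time - first_block_time) & MASK32
assert actual_timespan == TWO_WEEKS
print(actual_timespan)  # 1209600
```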