In computer programming, a design pattern that gradually leads to very big problems is tight coupling: dependencies between pieces of code whereby two separate processes are muddled together, making it very difficult to change one without a slew of knock-on effects requiring changes in the other to make it work again.
I got ChatGPT to explain why this causes problems:
Using both an interface type and a concrete implementation of the interface in the same scope can lead to a few potential problems:
Tight coupling: When an interface and its concrete implementation are used in the same scope, they become tightly coupled. This can make it difficult to change the implementation in the future without affecting other parts of the code that depend on the interface.

Limited flexibility: If a concrete implementation is used instead of the interface type, it limits the flexibility of the code, as it becomes harder to swap out the implementation with a different one that satisfies the same interface. This can make the code less modular and harder to maintain.

Code complexity: If both the interface type and a concrete implementation of the interface are used in the same scope, it can make the code more complex and harder to understand. This can lead to bugs and make it harder for other developers to work with the code.

To avoid these problems, it's generally recommended to use the interface type as much as possible and only use concrete implementations when necessary. This allows for more flexibility and modularity in the code, making it easier to maintain and update in the future.
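To make the coupling point concrete, here is a minimal Go sketch, with hypothetical Store/MemStore names not taken from any real codebase: the helper that takes the concrete type is welded to one implementation, while the helper that takes the interface works with any implementation.

```go
package main

import "fmt"

// Store is the interface callers should depend on.
type Store interface {
	Put(key, value string)
	Get(key string) (string, bool)
}

// MemStore is one concrete implementation of Store.
type MemStore struct{ data map[string]string }

func NewMemStore() *MemStore { return &MemStore{data: map[string]string{}} }

func (m *MemStore) Put(key, value string) { m.data[key] = value }

func (m *MemStore) Get(key string) (string, bool) {
	v, ok := m.data[key]
	return v, ok
}

// Tightly coupled: this helper can only ever work with MemStore,
// so replacing the storage backend forces changes here too.
func countKeysCoupled(s *MemStore) int { return len(s.data) }

// Loosely coupled: this helper works with any Store, so the
// implementation can be swapped without touching the caller.
func save(s Store, key, value string) { s.Put(key, value) }

func main() {
	s := NewMemStore()
	save(s, "genesis", "block 0") // the caller only needs the interface
	fmt.Println(countKeysCoupled(s))
}
```

Swapping MemStore for, say, a disk-backed store would force a change to countKeysCoupled, but save and everything else written against the interface would be untouched.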
The commonality between these two mistakes, and this also relates to the Single Responsibility Principle, is that two separate things are built with so many strings between them that you are deceived into seeing two things when in fact there is only one.
The CAP theorem states that a distributed system can guarantee at most two of Consistency, Availability, and Partition tolerance; in other words, only two of its characteristics can ever match those of a centralised system.
Bitcoin's availability is its weakness. No, not the read side: the write side. Bitcoin is very consistent; although some describe its "probabilistic finality" as weak, it is the same thing we mean by Consistency in CAP. And no, you can't spam it full of tens of megabytes of blocks, for a very specific reason: as it is, an average low-spec PC's CPU takes 1-2 minutes to verify and certify that a block is fully correct and can be the new best block.
A block would only have to be 5-10x that size to leave such a node permanently out of sync: once verifying a block takes longer than the roughly ten-minute interval at which new blocks arrive, the node falls further behind with every block and never catches up. The time cost also escalates with the number of transactions, which on average take more and more time to trace back to the coinbase; the relationship would be exponential, with each factor multiplying the final result by a greater amount.
Lightning is the correct solution for providing higher write-side availability to the system.
Similar to how CPU caches offload work from the memory bus, Lightning exploits the fact that in a distributed system, state changes can be expressed as differentials that have no dependencies on each other. Because addition and subtraction are commutative, it doesn't matter in what order the members of the network receive the messages adding to and subtracting from a balance: over a given period of time they sum to the same answer, and thus achieve system-hood, the same answer from anyone in the network.
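A tiny Go sketch of that commutativity claim, using made-up balance deltas rather than real Lightning channel updates: however the updates are reordered, the net balance at the end of the period is identical.

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Signed balance changes over some settlement period:
	// payments received are positive, payments sent are negative.
	deltas := []int64{+500, -120, +75, -300, +1000, -55}

	// Apply them in the original order.
	var a int64
	for _, d := range deltas {
		a += d
	}

	// Apply them in a shuffled order, as different nodes might
	// observe the same updates arriving in different sequences.
	shuffled := append([]int64(nil), deltas...)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	var b int64
	for _, d := range shuffled {
		b += d
	}

	// Addition is commutative, so every ordering yields the same net balance.
	fmt.Println(a, b, a == b)
}
```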
Bitcoin is one API, with safety and security as its priorities. Attempting to make it behave like two separate systems that aim for different results is an error that rapidly leads to a system that cannot adapt, and not long after, it goes to zero.
Lightning gives you availability: through its peer-to-peer network, orders of magnitude greater throughput. The base layer's maximum rate of processing transactions is bounded by the CPU computing very large modulo operations on 256-512 bit numbers, to roughly 1000-2000 per second, as you see during initial block download.
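As a rough, hedged illustration of that CPU bound, the sketch below times ECDSA signature verification using Go's standard-library P-256 curve. Bitcoin actually uses secp256k1 with heavily optimised libraries, so the printed rate is only indicative of the order of magnitude involved, not a measurement of real node performance.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"time"
)

func main() {
	// Rough throughput probe: how many ECDSA verifications per second
	// can one core do? P-256 stands in for Bitcoin's secp256k1 here.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	digest := sha256.Sum256([]byte("transaction bytes would be hashed here"))
	sig, err := ecdsa.SignASN1(rand.Reader, priv, digest[:])
	if err != nil {
		panic(err)
	}

	const n = 5000
	start := time.Now()
	for i := 0; i < n; i++ {
		if !ecdsa.VerifyASN1(&priv.PublicKey, digest[:], sig) {
			panic("verification failed")
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("%d verifications in %v (%.0f/sec)\n",
		n, elapsed, float64(n)/elapsed.Seconds())
}
```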
Block validation cannot be substantially parallelised, and the longer the chain gets, the more prior transactions have to be looked up for each transaction being validated.
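A toy Go model of why that work is inherently sequential, assuming a drastically simplified transaction format rather than Bitcoin's actual rules: every input must be found in the unspent-output set built up from all earlier transactions before the block can be accepted.

```go
package main

import "fmt"

// Tx is a heavily simplified transaction: it spends named outputs
// created by earlier transactions and creates new ones.
type Tx struct {
	ID      string
	Inputs  []string
	Outputs []string
}

// apply walks transactions in order: each input has to exist in the
// unspent-output set produced by everything that came before, which is
// why validation is order-dependent and hard to parallelise.
func apply(utxo map[string]bool, txs []Tx) error {
	for _, tx := range txs {
		for _, in := range tx.Inputs {
			if !utxo[in] {
				return fmt.Errorf("tx %s spends unknown or spent output %s", tx.ID, in)
			}
			delete(utxo, in)
		}
		for _, out := range tx.Outputs {
			utxo[out] = true
		}
	}
	return nil
}

func main() {
	utxo := map[string]bool{"coinbase:0": true}
	txs := []Tx{
		{ID: "a", Inputs: []string{"coinbase:0"}, Outputs: []string{"a:0", "a:1"}},
		{ID: "b", Inputs: []string{"a:1"}, Outputs: []string{"b:0"}},
	}
	fmt.Println(apply(utxo, txs)) // <nil>: both spends resolve in order
}
```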
Really, all these limitations are baked into the pie by the fact that, as an attempt to implement a money, the system is implicitly required to be global, trustless and neutral. Being global leads you to the 12-second broadcast latency and the 1-2 minutes of validation latency; to be trustless, you have to have strong signatures; and to be neutral, the builders have to disappear so that the equilibrium of competition provides strong resistance to marginally useful changes.
All of the other electronic money projects in existence are essentially making the same mistake as our tightly coupled software developers: seeing one thing when there are actually two. A distributed system with all three CAP properties is really two distributed systems. Bitcoin and Lightning combine to bypass CAP.
Big-blocks people are making the mistake of comparing their own computer, with its sub-metre distances between components, where latency costs are low and the illusion of unity is easy to achieve, with the expensive, slow computer called "bitcoin". Or with centralised databases, where the gatekeeping can be parallelised and the data synchronised without thousands of elliptic curve signatures to verify. Bitcoin can't have big blocks because of physics. Other blockchains only get away with it by trading off decentralisation, reducing the coordination cost.
Oh, and I haven't even mentioned Smart Contracts and how they are yet another, different type of system, one that does not need to be global and immutable. They are contracts between parties: at minimum they need third-party witnesses, and at worst a couple of adjudicators for a dispute.
They don't need the entire world as witnesses, and judges.
Money does though.
Are latency and bandwidth the reason why a distributed system cannot have all three properties of the CAP theorem?
ChatGPT
What features of a distributed system impart partition tolerance?
ChatGPT