
In computer programming, there is a design anti-pattern that gradually leads to very big problems: creating dependencies between code whereby two separate processes are muddled together, making it very difficult to change one without a slew of knock-on effects requiring changes in the other to make it work again.
I got ChatGPT to explain why this causes problems:
Using both an interface type and a concrete implementation of the interface in the same scope can lead to a few potential problems:
  • Tight coupling: When an interface and its concrete implementation are used in the same scope, they become tightly coupled. This can make it difficult to change the implementation in the future without affecting other parts of the code that depend on the interface.
  • Limited flexibility: If a concrete implementation is used instead of the interface type, it limits the flexibility of the code, as it becomes harder to swap out the implementation with a different one that satisfies the same interface. This can make the code less modular and harder to maintain.
  • Code complexity: If both the interface type and a concrete implementation of the interface are used in the same scope, it can make the code more complex and harder to understand. This can lead to bugs and make it harder for other developers to work with the code.
To avoid these problems, it's generally recommended to use the interface type as much as possible and only use concrete implementations when necessary. This allows for more flexibility and modularity in the code, making it easier to maintain and update in the future.
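To make the distinction concrete, here is a minimal Python sketch (the payment-backend names are hypothetical, invented purely for illustration): the first caller constructs the concrete class itself and is welded to it, while the second depends only on the interface and can be handed any implementation that satisfies it.

```
from typing import Protocol

# Hypothetical interface and implementations, for illustration only.
class PaymentBackend(Protocol):
    def send(self, amount_sats: int, destination: str) -> str: ...

class LightningBackend:
    def send(self, amount_sats: int, destination: str) -> str:
        return f"ln-paid:{destination}:{amount_sats}"

class OnChainBackend:
    def send(self, amount_sats: int, destination: str) -> str:
        return f"txid-for:{destination}:{amount_sats}"

# Tightly coupled: the caller names the concrete class, so swapping the
# backend later forces a change here as well.
def pay_tightly_coupled(amount_sats: int, destination: str) -> str:
    backend = LightningBackend()
    return backend.send(amount_sats, destination)

# Loosely coupled: the caller only knows the interface; any implementation
# can be injected without touching this function.
def pay(backend: PaymentBackend, amount_sats: int, destination: str) -> str:
    return backend.send(amount_sats, destination)

print(pay(LightningBackend(), 1000, "invoice123"))
print(pay(OnChainBackend(), 1000, "bc1qexample"))
```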
The commonality between these two mistakes, which also relates to the Single Responsibility Principle, is that two separate things are being built with so many strings between them that you are deceived into seeing two things when in fact there is only one.
The CAP theorem states that a distributed system can guarantee at most two of three properties: Consistency, Availability, and Partition tolerance. In other words, only two of its characteristics can ever be the same as a centralised system's.
Bitcoin's availability is its weakness. No, not the read side, the write side. It is very consistent, even though some describe that consistency as weak: "probabilistic finality" is the same thing we mean by Consistency in CAP. And no, you can't spam it full of tens of megabytes of blocks, for a very specific reason: as it is, an average low-spec PC's CPU takes 1-2 minutes to verify and certify that a block is fully correct and can become the new best block.
Blocks would only have to be 5-10x that size to leave such a node permanently out of sync, and the time cost escalates with the number of transactions, which on average take more and more time to trace back to their coinbase. It is an exponential relationship, with each factor multiplying the final result by a greater amount.
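A rough back-of-envelope sketch of that claim (assuming, generously, that validation time scales only linearly with block size, with 120 seconds per current-sized block and a 600-second average block interval):

```
# Back-of-envelope: when does validation time exceed the block interval?
# Assumed numbers, not measurements: ~120 s to validate a current-sized block
# on a low-spec PC, 600 s average block interval, linear scaling with size.
BLOCK_INTERVAL_S = 600
BASE_VALIDATION_S = 120

for multiple in (1, 2, 5, 10):
    validation_s = BASE_VALIDATION_S * multiple
    status = "keeps up" if validation_s < BLOCK_INTERVAL_S else "falls behind for good"
    print(f"{multiple:>2}x blocks -> ~{validation_s:>4} s to validate: node {status}")
```

At 5x the low-spec node is already at the edge of the block interval, and at 10x it can never catch up, before even accounting for the super-linear growth described above.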
Lightning is the correct solution for providing higher write-side availability to the system.
Similar to how CPU caches offload work from the memory bus, Lightning makes use of the fact that in a distributed system, state changes can be defined as differentials that have no dependencies on each other. Because addition and subtraction are commutative, it doesn't matter in what order the members of the network receive the messages that add to or subtract from a balance: over a given period of time they sum to the same answer, and thus achieve system-hood, the same answer from anyone in the network.
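A minimal sketch of that commutativity argument (a toy list of balance differentials, not actual Lightning code): applying the same set of updates in any order produces the same final balance.

```
import random

# Toy example: independent balance differentials (sats in and out).
updates = [+500, -120, +75, -300, +1000, -55]

def final_balance(deltas, start=0):
    balance = start
    for d in deltas:
        balance += d
    return balance

# Apply the same updates in several random orders: the result never changes,
# because addition and subtraction commute.
reference = final_balance(updates)
for _ in range(3):
    shuffled = updates[:]
    random.shuffle(shuffled)
    assert final_balance(shuffled) == reference
print("final balance regardless of order:", reference)
```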
Bitcoin is one API, with safety and security as its priorities. Attempting to make it behave like two separate systems that aim for different results is an error that rapidly leads to a system that cannot adapt, and not long after, one that goes to zero.
Lightning gives you availability: through its peer-to-peer network, orders of magnitude greater throughput. Bitcoin's maximum rate of processing transactions is bounded by the CPU computing very big modulo operations on 256-512 bit numbers, to about 1000-2000 per second, as you see during initial block download.
Block validation cannot be substantially parallelised, and the longer the chain gets, the more prior transactions it has to look up per transaction.
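A rough way to get a feel for that per-core bound is to time signature verifications yourself. This is an illustrative sketch using the pure-Python ecdsa package, which is far slower than the optimised libsecp256k1 library a real node uses, so treat the output as an order-of-magnitude toy rather than a measurement of Bitcoin Core:

```
import time
from ecdsa import SECP256k1, SigningKey  # pip install ecdsa

# Generate one key pair and sign a batch of dummy messages.
sk = SigningKey.generate(curve=SECP256k1)
vk = sk.get_verifying_key()
messages = [f"tx-{i}".encode() for i in range(500)]
signatures = [sk.sign(m) for m in messages]

# Time how many verifications per second a single core manages.
start = time.perf_counter()
for sig, msg in zip(signatures, messages):
    vk.verify(sig, msg)
elapsed = time.perf_counter() - start
print(f"{len(messages) / elapsed:.0f} signature verifications per second")
```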
Really, all these limitations are baked into the pie by the fact that, as an attempt to implement a money, the system is implicitly required to be global, trustless and neutral. Being global leads you to the 12-second broadcast latency and the 1-2 minute validation latency; to be trustless, you have to have strong signatures; and to be neutral, the builders have to disappear so that the equilibrium of competition provides strong resistance to marginally useful changes.
All of the other electronic money projects in existence are essentially making the same mistake as our tightly coupled software developers: seeing one thing when there are actually two. A distributed system with all three CAP properties is really two distributed systems. Bitcoin and Lightning combine to bypass CAP.
Big-block people are making the mistake of comparing the expensive, slow computer called "bitcoin" with their own computer, where the sub-metre distances between components keep latency costs low and the illusion of unity is easy to achieve. Or with centralised databases, where the gatekeeping can be parallelised and the data synchronised without thousands of elliptic curve signatures to verify. Bitcoin can't have big blocks because of physics. The other blockchains only get away with it by trading off decentralisation, thereby reducing the coordination cost.
Oh, and I hadn't even mentioned the matter of smart contracts and how they are yet another, different type of system, one that does not need to be global and immutable. They are contracts between parties: at worst they need a couple of adjudicators for a dispute, and at minimum they need third-party witnesses.
They don't need the entire world as witnesses, and judges.
Money does though.
I just ran a few prompts through GPT about this and wanted to share them, as they help further support my thesis:

Is latency and bandwidth the reason why a distributed system cannot have all three properties of the CAP theorem?

ChatGPT

Yes, latency and bandwidth are factors that contribute to the reason why a distributed system cannot have all three properties of the CAP theorem. The CAP theorem, also known as Brewer's theorem, states that in a distributed system, it is impossible to guarantee all of the following three properties simultaneously: consistency, availability, and partition tolerance.
Consistency means that all nodes in a distributed system see the same data at the same time. Availability means that every request to the system receives a response, without guaranteeing that it contains the most recent version of the data. Partition tolerance means that the system can continue to function even if communication between nodes is lost or delayed.
Latency and bandwidth impact the ability of a distributed system to maintain consistency and availability. In order to maintain consistency, a system may need to ensure that all nodes have access to the same data at the same time. However, network latency and bandwidth limitations can make it difficult to synchronize data across multiple nodes in real-time, which can lead to inconsistencies.
Similarly, availability can be impacted by network latency and bandwidth. If a node is unable to respond to requests within a certain time frame due to network latency or bandwidth limitations, the system may be unable to maintain availability.
Partition tolerance, on the other hand, is not impacted by network latency and bandwidth. It is a property that relates to the ability of the system to function even if network partitions occur.
Therefore, while latency and bandwidth can impact the consistency and availability properties of a distributed system, they do not directly impact partition tolerance, which is why a distributed system cannot have all three properties of the CAP theorem simultaneously.

What features of a distributed system impart partition tolerance?

ChatGPT

Partition tolerance is a property of distributed systems that refers to the ability of a system to continue functioning properly even when network partitions occur. In other words, a partition-tolerant distributed system can maintain availability and consistency even in the presence of network failures or network partitioning events.
The following features of a distributed system can impart partition tolerance:
  • Replication: A partition-tolerant distributed system must have the ability to replicate data across multiple nodes. By replicating data, the system can continue to function even if some nodes are unavailable due to network partitions.
  • Consensus algorithms: A partition-tolerant distributed system must use consensus algorithms to ensure that all nodes in the system agree on the state of the system. Consensus algorithms help to maintain consistency and ensure that all nodes have the same view of the data.
  • Failure detection: A partition-tolerant distributed system must have the ability to detect node failures and network partitions in real-time. This allows the system to adjust its behavior and maintain availability and consistency even in the presence of failures.
  • Routing algorithms: A partition-tolerant distributed system must use routing algorithms to ensure that messages are delivered to the correct nodes even in the presence of network partitions.
By using these features, a distributed system can maintain partition tolerance and continue functioning properly even in the presence of network failures or partitioning events. However, it is important to note that achieving partition tolerance often requires trade-offs in terms of consistency and availability, as described by the CAP theorem.
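To make that trade-off concrete, here is a toy sketch (a hypothetical two-replica register, not any real database) of what happens during a partition: the system can refuse writes to stay consistent, or accept divergent writes to stay available, but not both.

```
class Replica:
    """Toy single-value register replica."""
    def __init__(self, name):
        self.name = name
        self.value = None

def write(replicas, unreachable, value, mode):
    """Attempt a write while some replicas are partitioned away.

    mode="CP": reject the write unless every replica is reachable
               (consistent, but unavailable during the partition).
    mode="AP": write to whatever replicas are reachable
               (available, but the replicas now disagree).
    """
    reachable = [r for r in replicas if r.name not in unreachable]
    if mode == "CP" and len(reachable) < len(replicas):
        return "write rejected: cannot reach all replicas"
    for r in reachable:
        r.value = value
    return f"write accepted by {[r.name for r in reachable]}"

a, b = Replica("A"), Replica("B")
print(write([a, b], {"B"}, "x=1", mode="CP"))  # consistent, not available
print(write([a, b], {"B"}, "x=1", mode="AP"))  # available, A and B diverge
print("A:", a.value, "| B:", b.value)
```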
I also need to mention synchrony, which is defined as the enforcement of a definite upper bound on the travel time of messages. Full synchrony would mean fractions of nanoseconds of travel time, strong synchrony would mean latency in the small milliseconds, and weak synchrony is anything from a few seconds to days.
Proof of Stake systems have strong synchrony, typically under 1 second drift between the small number of nodes in them.
But I need to revise my statement about the proper attribute balance of bitcoin:
Availability in Bitcoin is strong: you can point to the number of replicas as a metric for this, and it's somewhere between 18 and 25 thousand copies of the complete chain held by actively online nodes.
Consistency is weak, probabilistic. You may have idly wondered why everyone says "6 confirmations to clear"... Well, if you say there is a 50% chance of a fork at the first block, and you compound that 6 times, you get 1/64, or 0.015625 as a decimal fraction.
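The arithmetic, for the record (the 50% per-block figure is an illustrative assumption, not a measured orphan rate):

```
p_fork = 0.5           # assumed per-block chance of sitting on a losing fork
confirmations = 6
p_reorged = p_fork ** confirmations
print(p_reorged)       # 0.015625, i.e. 1/64
```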
Forks in Bitcoin tend to happen when multiple new block solutions are found within a short time of each other. Because of propagation delay, perhaps compounded by transient inbound latency on the network of the miner of the second block, and basically due to the weak synchrony, there is a period of time after a new block is found during which multiple solutions can exist, i.e. network partitions.
The "heaviest chain wins" rule tends to rapidly resolve the partitions, and this is the secret sauce that makes for Bitcoin's Double Spend protection - because of being a decentralised system, it is difficult to coerce enough nodes to not send a transaction to other nodes, and once a node sees both transactions they make the same decision.
Yeah, so, definitely A and P, and C is what Bitcoin's design necessarily trades off to achieve them.
LN is an entirely different beast. The distributed-systems part of it is the peer database and the channels, and it has weak consistency; its availability is high, really only limited by network conditions and luck. Its partition resistance is very high, since, after all, in the case of executing a payment, literally every other node on the network could be down and it would still settle. LN uses routing and failure detection, two partition tolerance strategies, where Bitcoin uses consensus and replication.