James A. Donald <jamesd at echeque.com>
Mon Nov 17 18:57:39 EST 2008
https://www.metzdowd.com/pipermail/cryptography/2008-November/014865.html

Ray Dillinger wrote:
> Okay.... I'm going to summarize this protocol as I understand it.
> I'm filling in some operational details that aren't in the paper by supplementing what you wrote with what my own "design sense" tells me are critical missing bits or "obvious" methodologies for use.
There are a number of significantly different ways this could be implemented. I have been working on my own version based on Patricia hash trees (not yet ready to post; will post in a week or so), with the consensus generation being a generalization of file sharing using Merkle hash trees. Patricia hash trees where the high-order part of the Patricia key represents the high-order part of the time can be used to share data that evolves in time. The algorithm, if implemented by honest, correctly functioning peers, regularly generates consensus hashes of the recent past - thereby addressing the problem I have been complaining about: that we have a mechanism to protect against consensus distortion by dishonest or malfunctioning peers, which is useless absent a definition of consensus generation by honest and correctly functioning peers.
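A rough sketch of what I have in mind, in Python (names and structure purely illustrative, not the finished design): a Patricia-style tree whose keys begin with the time, so each time window forms one subtree whose hash honest peers can compute and compare.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    class PatriciaHashTree:
        """Toy stand-in for a Patricia hash tree. Keys are hex strings
        whose high-order digits encode the time of the entry, so a
        shared key prefix corresponds to a shared time window."""

        def __init__(self):
            self.leaves = {}  # key -> hash of the entry

        def insert(self, key: str, entry: bytes):
            self.leaves[key] = h(entry)

        def subtree_hash(self, prefix: str = "") -> bytes:
            """Hash over all entries whose key starts with `prefix`.
            Honest, correctly functioning peers holding the same data
            get the same hash, so the consensus hash of any recent
            time window is cheap to compute and compare."""
            acc = hashlib.sha256()
            for key in sorted(k for k in self.leaves if k.startswith(prefix)):
                acc.update(key.encode())
                acc.update(self.leaves[key])
            return acc.digest()

    def time_key(hour: int, txid: bytes) -> str:
        """High-order part of the key is the time; the low-order part
        distinguishes entries within the same hour."""
        return f"{hour:08x}" + txid.hex()

A real implementation would share interior node hashes rather than recompute over a flat dictionary, but the time-prefix idea is the same.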
> First, people spend computer power creating a pool of coins to use as money. Each coin is a proof-of-work meeting whatever criteria were in effect for money at the time it was created. The time of creation (and therefore the criteria) is checkable later because people can see the emergence of this particular coin in the transaction chain and track it through all its "consensus view" spends. (more later on coin creation tied to adding a link).
> When a coin is spent, the buyer and seller digitally sign a (blinded) transaction record, and broadcast it to a bunch of nodes whose purpose is keeping track of consensus regarding coin ownership.
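For concreteness, the "criteria in effect at the time" check described above might look like the following sketch (the difficulty schedule is invented for illustration; any published, time-indexed rule that all nodes can evaluate would serve):

    import hashlib

    def difficulty_at(creation_time: int) -> int:
        """Hypothetical published schedule: required leading zero bits
        as a function of the coin's creation time."""
        return 16 + creation_time // 10_000

    def is_valid_coin(coin: bytes, creation_time: int) -> bool:
        """A coin is a proof-of-work: valid if its hash meets the
        difficulty in force when the transaction chain shows it
        first appearing."""
        value = int.from_bytes(hashlib.sha256(coin).digest(), "big")
        return value >> (256 - difficulty_at(creation_time)) == 0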
I don't think your blinding works.
If there is a public record of who owns what coin, we have to generate a public diff on changes in that record, so the record will show that a coin belonged to X, and soon thereafter belonged to Y. I don't think blinding can be made to work. We can blind the transaction details easily enough, by only making hashes of the details public (X paid Y for 49vR7xmwYcKXt9zwPJ943h9bHKC2pG68m), but that X paid Y is going to be fairly obvious.
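Such partial blinding might be sketched as follows (field names invented): only a hash commitment to the details is published, while the parties remain visible.

    import hashlib, json

    def public_record(payer: str, payee: str, details: dict) -> dict:
        """Publish a commitment to the details, not the details."""
        blob = json.dumps(details, sort_keys=True).encode()
        return {
            "payer": payer,    # public: that X paid Y is obvious
            "payee": payee,    # public
            "details_hash": hashlib.sha256(blob).hexdigest(),  # blinded
        }

    # e.g. public_record("X", "Y",
    #          {"goods": "49vR7xmwYcKXt9zwPJ943h9bHKC2pG68m"})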
If, when Joe spends a coin to me, I have to have the ability to ask "Does Joe rightfully own this coin?", then it is difficult to see how this can be implemented in a distributed protocol without giving people the ability to trawl through data detecting that Joe paid me.
To maintain a consensus on who owns what coins, coin ownership has to be public.
We can build a privacy layer on top of this - account money and Chaumian money based on bitgold coins, much as the pre-1915 US banking system layered account money and bank notes on top of gold coins - and indeed we have to build a layer on top to bring the transaction cost down to the level that supports agents performing micro transactions, as needed for bandwidth control, file sharing, and charging non-whitelisted people to send us communications.
So the entities on the public record are entities functioning like pre-1915 banks - let us call them binks, for post-1934 banks no longer function like that.
> But if they receive a longer chain while working, they immediately check all the transactions in the new links to make sure it contains no double spends and that the "work factors" of all new links are appropriate.
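That check might be sketched as follows (the helper callables are placeholders for link validation and transaction extraction):

    def should_adopt(new_chain, current_chain, work_is_valid, spends_in):
        """On receiving a longer chain, verify the work factor of
        every link and scan its transactions for double spends
        before switching to it."""
        if len(new_chain) <= len(current_chain):
            return False
        spent = set()
        for link in new_chain:
            if not work_is_valid(link):
                return False
            for coin in spends_in(link):
                if coin in spent:
                    return False  # double spend within the chain
                spent.add(coin)
        return True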
I am troubled that this involves frequent retransmissions of data that is already mostly known. Consensus and widely distributed beliefs about bitgold ownership already involve significant cost. Further, each transmission of data is subject to data loss, which can result in thrashing, with the risk that the generation of consensus may slow below the rate of new transactions. We already have problems getting the cost down to levels that support micro transactions by software agents, which is the big unserved market - bandwidth control, file sharing, and charging non-whitelisted people to send us communications.
To work as a useful project, it has to be as efficient as it can be - hence my plan to use a Patricia hash tree, because it identifies and locates small discrepancies between peers that are mostly in agreement already, without their needing to transmit their complete data.
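Continuing the Patricia tree sketch above: two mostly-agreeing peers compare subtree hashes and descend only where they differ, so the data actually transmitted is proportional to the disagreement, not to the whole record.

    HEX_DIGITS = "0123456789abcdef"

    def find_discrepancies(local, remote_hash, prefix="", max_depth=8):
        """`local` is a PatriciaHashTree; `remote_hash(prefix)` stands
        in for one small network query to the other peer. Returns the
        key prefixes on which the two peers disagree."""
        if local.subtree_hash(prefix) == remote_hash(prefix):
            return []            # entire subtree agrees: nothing to send
        if len(prefix) >= max_depth:
            return [prefix]      # narrow enough: resynchronize this range
        diffs = []
        for digit in HEX_DIGITS:  # descend only into differing children
            diffs.extend(find_discrepancies(local, remote_hash,
                                            prefix + digit, max_depth))
        return diffs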
We also want to avoid very long hash chains that have to be frequently checked in order to validate things. Any time a hash chain can potentially become enormously long over time, we need to ensure that no one ever has to re-walk the full length. Chains that need to be re-walked can only be permitted to grow as the log of the total number of transactions - if they grow as the log of the transactions in any one time period plus the total number of time periods, we have a problem.
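One standard way to get the logarithmic bound - a sketch, not a committed design - is a Merkle inclusion proof, where validating one transaction against a consensus root takes about log2(n) hashes instead of a re-walk of the chain:

    import hashlib

    def h2(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        assert leaves, "need at least one leaf"
        level = [h2(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:  # duplicate the last node if the level is odd
                level.append(level[-1])
            level = [h2(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    def verify_inclusion(leaf, proof, root):
        """`proof` is the log(n)-length list of (sibling_hash,
        sibling_is_right) pairs from the leaf up to the root -
        no full re-walk of history needed."""
        node = h2(leaf)
        for sibling, sibling_is_right in proof:
            node = h2(node + sibling) if sibling_is_right else h2(sibling + node)
        return node == root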
> Biggest Technical Problem:
> Is there a mechanism to make sure that the "chain" does not consist solely of links added by just the 3 or 4 fastest nodes? 'Cause a broadcast transaction record could easily miss those 3 or 4 nodes and if it does, and those nodes continue to dominate the chain, the transaction might never get added.
> To remedy this, you need to either ensure provable propagation of transactions, or vary the work factor for a node depending on how many links have been added since that node's most recent link.
> Unfortunately, both measures can be defeated by sock puppets. This is probably the worst problem with your protocol as it stands right now; you need some central point to control the identities (keys) of the nodes and prevent people from making new sock puppets.
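The varying work factor proposed above might look like this (the scaling rule is invented for illustration; a fresh sock-puppet identity resets the count, which is exactly how the measure is defeated):

    def work_factor(base_difficulty: int, links_since_last: int) -> int:
        """Hypothetical rule: a node that added a link recently faces
        a harder target, so the 3 or 4 fastest nodes cannot monopolize
        the chain. A brand-new sock puppet has no recent links and so
        always gets the easiest target."""
        recency_penalty = max(0, 8 - links_since_last)
        return base_difficulty + recency_penalty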
We need a protocol wherein, to be a money-tracking peer (an entity that validates spends), you have to be accepted by at least two existing peers who agree to synchronize data with you - presumably through human intervention by the owners of existing peers - and these two human-approved synchronization paths indirectly connect you to the other peers in the network through at least one graph cycle.
If peer X is only connected to the rest of the network by one existing peer, peer Y, perhaps because X's directly connecting peer has dropped out, then X is demoted to a client, not a peer - any transactions X submits are relabeled by Y as submitted to Y, not X, and the time of submission (which forms part of the Patricia key) is the time X submitted them to Y, not the time they were submitted to X.
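A toy model of the admission and demotion rule (structure and thresholds illustrative only):

    class PeerNetwork:
        """Peers are admitted by two existing peers; a node whose
        approved links fall to one is demoted to a client of that
        remaining link."""

        def __init__(self):
            self.links = {}  # node -> set of directly synchronized nodes

        def admit(self, newcomer, sponsor_a, sponsor_b):
            assert sponsor_a != sponsor_b, "needs two distinct human-approved sponsors"
            self.links.setdefault(newcomer, set()).update({sponsor_a, sponsor_b})
            self.links.setdefault(sponsor_a, set()).add(newcomer)
            self.links.setdefault(sponsor_b, set()).add(newcomer)

        def drop_link(self, a, b):
            self.links.get(a, set()).discard(b)
            self.links.get(b, set()).discard(a)

        def role(self, node) -> str:
            degree = len(self.links.get(node, ()))
            if degree >= 2:
                return "peer"    # indirectly on a graph cycle through the network
            if degree == 1:
                return "client"  # its transactions are relabeled by its one peer
            return "disconnected"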
The algorithm must be able to swiftly detect malfunctioning peers, and automatically exclude them from the consensus temporarily - which means that transactions submitted through malfunctioning peers do not get included in the consensus and therefore have to be resubmitted, and peers may find themselves temporarily demoted to clients, because one of the peers through which they were formerly connected to the network has been dropped by the consensus.
If a peer gets a lot of automatic temporary exclusions, there may be human intervention by the owners of those peers with which it exchanges data directly, to permanently drop it.
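The escalation from automatic temporary exclusion to human intervention might be tracked like this (the threshold is arbitrary):

    from collections import defaultdict

    TEMP_EXCLUSION_LIMIT = 3  # hypothetical: exclusions before humans are asked

    class ExclusionTracker:
        def __init__(self):
            self.failures = defaultdict(int)
            self.temporarily_excluded = set()
            self.flagged_for_owners = set()

        def report_malfunction(self, peer):
            """Exclude the peer from consensus for now; transactions
            submitted through it must be resubmitted elsewhere. Repeat
            offenders are flagged to the owners of their direct
            neighbours, who may permanently drop them."""
            self.failures[peer] += 1
            self.temporarily_excluded.add(peer)
            if self.failures[peer] >= TEMP_EXCLUSION_LIMIT:
                self.flagged_for_owners.add(peer)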
Since peers get accepted by human invite, they have reputation to lose, therefore we can make the null hypothesis (the primary Bayesian prior) honest intent, valid data, but unreliable data transmission - trust with infrequent random verification. Designing the system on this basis considerably reduces processing costs.
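Trust with infrequent random verification, sketched (the audit rate is illustrative): accept data on the null hypothesis and pay for full verification only on a random sample.

    import random

    AUDIT_RATE = 0.01  # hypothetical: fully verify about 1% of batches

    def accept_batch(batch, full_verify, audit_rate=AUDIT_RATE):
        """Null hypothesis: honest intent, valid data, unreliable
        transmission. Expensive signature and consistency checks run
        only on a random sample; one failed audit is strong evidence
        against the sending peer."""
        if random.random() < audit_rate:
            if not full_verify(batch):
                raise ValueError("random audit failed: peer sent invalid data")
        return batch  # accepted on trust the rest of the time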
Recall that SET died on its ass in large part because every transaction involved innumerable public key operations. Similarly, we have huge security flaws in https because it has so many redundant public key operations that web site designers try to minimize the use of https to cover only those areas that truly need it - and they always get the decision as to what truly needs it subtly wrong.
Efficiency is critical, particularly as the part of the market not yet served is the market for very low cost transactions.
> If we solve the sock-puppet issue, or accept that there's a central point controlling the generation of new keys,
A central point will invite attack, and will be attacked.
The problem with computer networked money is that the past can so easily be revised, so nodes come under pressure to adjust the past - "I did not pay that" swiftly becomes "I should not have paid that", which requires arbitration, which is costly, and introduces uncertainty, which is costly, and invites government regulation, which is apt to be utterly ruinous and wholly devastating.
For many purposes, reversal and arbitration is highly desirable, but there is no way anyone can compete with the arbitration provided by Visa and Mastercard, for they have network effects on their side, and they do a really good job of arbitration, at which they have vast experience, accumulated skills, wisdom, and good repute. So any new networked transaction system has to target the demand for final and irreversible transactions.
The idea of a distributed network consensus is that one has a lot of peers in a lot of jurisdictions, and once a transaction has entered into the consensus, undoing it is damn near impossible - one would have to pressure most of the peers in most of the jurisdictions to agree, and many of them don't even talk your language, and those that do will probably pretend that they do not. So people will not even try.
To avoid pressure, the network has to avoid any central point at which pressure can be applied. Recall Caligula's wish that Rome had a single throat that he could cut. If we provide them with such a throat, it will be cut.