
Satoshi Nakamoto satoshi at vistomail.com
Fri Nov 14 13:55:35 EST 2008
https://www.metzdowd.com/pipermail/cryptography/2008-November/014853.html

Hal Finney wrote:
> I think it is necessary that nodes keep a separate pending-transaction list associated with each candidate chain. ... One might also ask ... how many candidate chains must a given node keep track of at one time, on average?
Fortunately, it's only necessary to keep a pending-transaction pool for the current best branch. When a new block arrives for the best branch, ConnectBlock removes the block's transactions from the pending-tx pool. If a different branch becomes longer, it calls DisconnectBlock on the main branch down to the fork, returning the block transactions to the pending-tx pool, and calls ConnectBlock on the new branch, sopping back up any transactions that were in both branches. It's expected that reorgs like this would be rare and shallow.
With this optimisation, candidate branches are not really any burden. They just sit on the disk and don't require attention unless they ever become the main chain.
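For illustration, here is a minimal sketch of the reorg handling described above. Only ConnectBlock and DisconnectBlock are named in the email; the types, pool layout, and the Reorganize helper are assumptions added for the example, not the actual sourcecode:

```cpp
// Sketch of the reorg logic described above. ConnectBlock and DisconnectBlock
// are named in the email; Tx, Block, the pool layout, and Reorganize are
// hypothetical illustrations.
#include <map>
#include <string>
#include <vector>

struct Tx { std::string hash; };
struct Block { std::vector<Tx> txs; };

// Pending-transaction pool, kept only for the current best branch.
std::map<std::string, Tx> pendingPool;

// A new block on the best branch confirms transactions: remove them.
void ConnectBlock(const Block& block) {
    for (const Tx& tx : block.txs)
        pendingPool.erase(tx.hash);
}

// A block abandoned in a reorg returns its transactions to the pool.
void DisconnectBlock(const Block& block) {
    for (const Tx& tx : block.txs)
        pendingPool[tx.hash] = tx;
}

// If a different branch becomes longer: disconnect the old branch from the
// tip down to the fork, then connect the new branch from the fork up to its
// tip. Transactions present in both branches are returned to the pool by
// DisconnectBlock and then "sopped back up" again by ConnectBlock.
void Reorganize(const std::vector<Block>& oldBranchTipToFork,
                const std::vector<Block>& newBranchForkToTip) {
    for (const Block& b : oldBranchTipToFork) DisconnectBlock(b);
    for (const Block& b : newBranchForkToTip) ConnectBlock(b);
}
```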
> Or as James raised earlier, if the network broadcast is reliable but depends on a potentially slow flooding algorithm, how does that impact performance?
Broadcasts will probably be almost completely reliable. TCP transmissions are rarely ever dropped these days, and the broadcast protocol has a retry mechanism to get the data from other nodes after a while. If broadcasts turn out to be slower in practice than expected, the target time between blocks may have to be increased to avoid wasting resources. We want blocks to usually propagate in much less time than it takes to generate them, otherwise nodes would spend too much time working on obsolete blocks.
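To put a rough number on that trade-off (my arithmetic, not from the original message): if a block takes about t seconds to reach most of the network and blocks are generated every T seconds on average, then roughly a t/T fraction of the network's work is spent on obsolete blocks. With, say, t = 30 and T = 600, only about 5% of work would be stale; if propagation slowed to t = 120, T would have to grow in proportion to keep that fraction small.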
I'm planning to run an automated test with computers randomly sending payments to each other and randomly dropping packets.
> 1. The bitcoin system turns out to be socially useful and valuable, so that node operators feel that they are making a beneficial contribution to the world by their efforts (similar to the various "@Home" compute projects where people volunteer their compute resources for good causes).
> In this case it seems to me that simple altruism can suffice to keep the network running properly.
It's very attractive to the libertarian viewpoint if we can explain it properly. I'm better with code than with words though.
Satoshi Nakamoto
James A. Donald jamesd at echeque.com
Sat Nov 15 19:00:04 EST 2008
https://www.metzdowd.com/pipermail/cryptography/2008-November/014861.html

Satoshi Nakamoto wrote:
> Fortunately, it's only necessary to keep a pending-transaction pool for the current best branch.
This requires that we know, that is to say that an honest, well-behaved peer whose communications and data storage are working well knows, what the current best branch is - but of course, the problem is that we are trying to discover, trying to converge upon, a best branch, which is not easy at the best of times, and becomes harder when another peer is lying about its connectivity and capabilities, yet another peer has just had a major disk drive failure obfuscated by a software crash, and the international fibers connecting a third peer have been attacked by terrorists.
> When a new block arrives for the best branch, ConnectBlock removes the block's transactions from the pending-tx pool. If a different branch becomes longer
Which presupposes the branches exist, that they are fully specified and complete. If they exist as complete works, rather than works in progress, then the problem is already solved, for the problem is making progress.
> Broadcasts will probably be almost completely reliable.
There is a trade-off between timeliness and reliability. One can make a broadcast arbitrarily reliable if time is of no consequence. However, when one is talking of distributed data, time is always of consequence, because it is all about synchronization: peers need to have corresponding views at corresponding times. So when one does distributed data processing, broadcasts are always highly unreliable, and attempts to ensure that each message arrives at least once result in increased timing variation. Thus one has to make a protocol that is either UDP or somewhat UDP-like, in that messages are small, failure of messages to arrive is common, messages can arrive in a different order to the order in which they were sent, and the same message may arrive multiple times. Either we have UDP, or we have to accommodate the same problems as UDP has on top of TCP connections.
Rather than assuming that each message arrives at least once, we have to make a mechanism such that the information arrives even though it is conveyed by messages that frequently fail to arrive.
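To make that requirement concrete, here is a toy sketch (mine, not from the thread) of the kind of receiver such a protocol implies, where loss, duplication, and reordering are all treated as normal:

```cpp
// Toy illustration of the delivery model described above: messages may be
// lost, duplicated, or reordered, so the receiver keys its state by content
// hash and processes each item at most once. All names are hypothetical.
#include <set>
#include <string>

std::set<std::string> processed;  // content hashes already handled

// Idempotent receive: duplicates are ignored, and arrival order is
// irrelevant because nothing depends on a sequence number.
bool AcceptMessage(const std::string& contentHash) {
    return processed.insert(contentHash).second;  // true only the first time
}
```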
> TCP transmissions are rarely ever dropped these days
People always load connections near maximum. When a connection is near maximum, TCP suffers frequent, unreasonably long delays, and connections simply fail a lot: your favorite web cartoon shows it is loading forever and you try again, or it comes up with a little x in place of a picture and you try again.
Further, very long connections, for example FTP downloads of huge files, seldom complete. If you try to FTP a movie, you are unlikely to get anywhere unless both client and server have a resume mechanism so that they can talk about partially downloaded files.
UDP connections, for example Skype video calls, also suffer frequent picture freezes, loss of quality, and so forth, and have to have mechanisms to keep going regardless.
> It's very attractive to the libertarian viewpoint if we can explain it properly. I'm better with code than with words though.
No, it is very attractive to the libertarian if we can design a mechanism that will scale to the point of providing the benefits of rapidly irreversible payment, immune to political interference, over the internet, to very large numbers of people. You have an outline and proposal for such a design, which is a big step forward, but the devil is in the little details.
I really should provide a fleshed out version of your proposal, rather than nagging you to fill out the blind spots.
Satoshi Nakamoto satoshi at vistomail.com
Mon Nov 17 12:24:43 EST 2008
https://www.metzdowd.com/pipermail/cryptography/2008-November/014863.html

James A. Donald wrote:
>> Fortunately, it's only necessary to keep a pending-transaction pool for the current best branch.
> This requires that we know, that is to say that an honest, well-behaved peer whose communications and data storage are working well knows, what the current best branch is -
I mean a node only needs the pending-tx pool for the best branch it has: the branch that it currently thinks is the best branch. That's the branch it'll be trying to make a block out of, which is all it needs the pool for.
>> Broadcasts will probably be almost completely reliable.
> Rather than assuming that each message arrives at least once, we have to make a mechanism such that the information arrives even though it is conveyed by messages that frequently fail to arrive.
I think I've got the peer networking broadcast mechanism covered.
Each node sends its neighbours an inventory list of hashes of the new blocks and transactions it has. The neighbours request the items they don't have yet. If an item never comes through after a timeout, they request it from another neighbour that had it. Since all or most of the neighbours should eventually have each item, even if the comms get fumbled up with one, they can get it from any of the others, trying one at a time.
The inventory-request-data scheme introduces a little latency, but it ultimately helps speed more by keeping extra data blocks off the transmit queues and conserving bandwidth.
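A sketch of that inventory-request-data flow as described above; the structures, helpers, and timeout handling below are assumptions added for illustration, not the actual protocol code:

```cpp
// Sketch of the inventory scheme described above: announce hashes, request
// only the missing items, and fall back to another announcing neighbour on
// timeout. All names and structures here are hypothetical illustrations.
#include <set>
#include <string>
#include <vector>

using Hash = std::string;

struct Peer {
    std::set<Hash> announced;  // hashes this neighbour says it has
};

std::set<Hash> have;           // blocks/transactions we already hold

// On receiving a peer's inventory list, request only what we lack.
std::vector<Hash> HandleInventory(Peer& from, const std::vector<Hash>& inv) {
    std::vector<Hash> requests;
    for (const Hash& h : inv) {
        from.announced.insert(h);
        if (!have.count(h))
            requests.push_back(h);  // ask this neighbour for the item
    }
    return requests;
}

// If a requested item times out, try the next neighbour that announced the
// same hash, one at a time, until someone supplies it.
Peer* NextSource(const Hash& h, std::vector<Peer>& peers, const Peer* tried) {
    for (Peer& p : peers)
        if (&p != tried && p.announced.count(h))
            return &p;
    return nullptr;                 // nobody else has announced it yet
}
```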
> You have an outline and proposal for such a design, which is a big step forward, but the devil is in the little details.
I believe I've worked through all those little details over the last year and a half while coding it, and there were a lot of them. The functional details are not covered in the paper, but the source code is coming soon. I sent you the main files. (available by request at the moment, full release soon)
Satoshi Nakamoto