I think there’s been good progress on stabilising Bitcoin development — in 2015 through 2017 we were in a phase where people were seriously thinking of replacing Bitcoin’s developers — devs were opposing a quick blocksize increase, so the obvious solution was to replace them with people who weren’t opposed. If you think of Bitcoin as an experimental, payments-oriented, tech startup, that’s perhaps not a bad idea; but if you think of it as a store of value, it’s awful: you don’t get a reliable system by replacing experts because they think your plan is wrong-headed, and you don’t get a good store of value without a reliable system. But whatever grudges might show up now and then on twitter, that seems to be pretty thoroughly in the past, and there now seems to be much broader support for funding devs, and much better consensus on what development should happen (though perhaps only because people who disagree have moved to different projects, and new disagreements haven’t yet cropped up).
But while that might be near enough the 64x improvement to support today’s valuation, I think we probably need a lot more to be able to support continued growth in adoption.
Hopefully this is buried enough to not accidentally become a lede, but I’m particularly optimistic about an as yet unannounced approach that DCI has been exploring, which (if I’ve understood correctly) aims to provide long term funding for a moderate sized team of senior devs and researchers to focus on keeping Bitcoin stable and secure — that is auditing code, developing tools to find and prevent bugs, and doing targeted research to help the white hats stay ahead in the security arms race. I’m not sure it will get off the ground or pass the test of time, and if it does, it will probably need to be replicated by other groups to avoid becoming worryingly centralising, but I think it’s a promising approach for supporting the next 8x improvement in security and robustness, and perhaps even some of the one after that.
I’ve also chatted briefly with Jeremy Rubin who has some interesting funding ideas for Judica — the idea being (again, if I haven’t misunderstood) to try to bridge the charitable/patronage model of a lot of funding of open source Bitcoin dev, with the angel funding approach that can generate more funds upfront by having a realistic possibility of ending up with a profitable business and thus a return on the initial funding down the road.
That seems much more blue-sky to me, but I think we’ll need to continue exploring out-there ideas in order to avoid centralisation by development-capture: that is, if we just expand on what we’re doing now, we may end up where only a few companies (or individuals) have their quarterly bottom line directly affected by development funding, and are thus shouldering the majority of the burden while the rest of the economy more-or-less freeloads off them, and then someone sees an opportunity to exploit development control and decides to buy them all out. A mild example of this might be Red Hat’s purchase of CentOS (via an inverse-acquihire, I suppose you could call it), and CentOS’s recent strategy change that reduces its competition with Red Hat’s RHEL product.
(There are also a lot of interesting funding experiments in the DeFi/ethereum space in general, though so far I don’t think they feed back well into the “ongoing funding of robustness and security development work” goal I’m talking about here)
There are probably three “attacks” that I’m worried about at present, all related to the improvements above.
One is that the “modularisation” goal above implies a lot of code being moved around, with the aim of not really changing any behaviour. But because the code that’s being changed is complicated, it’s easy to change behaviour by accident, potentially introducing irritating bugs or even vulnerabilities. And because reviewers aren’t expecting to see behaviour changes, it can be hard to catch these problems: it’s perhaps a similar problem to semi-autonomous vehicles or security screening — most of the time everything is fine so it’s hard to ensure you maintain full attention to deal with the rare times when things aren’t fine. And while we have plenty of automated checks that catch wide classes of error, they’re still far from perfect. To me this seems like a serious avenue for both accidental bugs to slip through, and a risk area for deliberate vulnerabilities to be inserted by attackers willing to put in the time to establish themselves as Bitcoin contributors. But even with those risks, modularisation still seems a worthwhile goal, so the question is how best to minimise the risks. Unfortunately, beyond what we’re already doing, I don’t have good ideas how to do that. I’ve been trying to include “is this change really a benefit?” as a review question to limit churn, but it hasn’t felt very effective so far.
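To make that risk concrete, here’s a toy sketch (plain Python, made-up functions, nothing to do with the actual Bitcoin Core code) of how a “move-only” refactor can change behaviour without the diff looking like it does: the original stops evaluating items once it handles the first match, while the extracted helper evaluates everything eagerly.

```python
# Toy illustration only -- not Bitcoin Core code. A "move-only" refactor
# that silently changes behaviour: the original stops evaluating once the
# first match is handled, the extracted helper evaluates every entry.

def process_first_original(items, wants_action, handle):
    """Handle only the first matching item, then stop scanning."""
    for item in items:
        if wants_action(item):
            handle(item)
            return True
    return False

def matching(items, wants_action):
    """The refactor: 'just' pull the filtering out into a helper..."""
    return [item for item in items if wants_action(item)]

def process_first_refactored(items, wants_action, handle):
    """...but now wants_action() runs on every item, not only up to the first hit."""
    for item in matching(items, wants_action):
        handle(item)
        return True
    return False

if __name__ == "__main__":
    evaluated = []
    def is_even(x):
        evaluated.append(x)          # stand-in for a side effect or expensive check
        return x % 2 == 0

    process_first_original([1, 2, 3, 4], is_even, lambda x: None)
    print("original evaluated:", evaluated)     # [1, 2]

    evaluated.clear()
    process_first_refactored([1, 2, 3, 4], is_even, lambda x: None)
    print("refactored evaluated:", evaluated)   # [1, 2, 3, 4]
```

If the predicate has side effects, is expensive, or can throw, the two versions aren’t equivalent, even though the refactor reads like a pure code move.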
Another potential attack is against code review — it’s an important part of keeping Bitcoin correct and secure, and it’s one that doesn’t really scale that well. It doesn’t scale for a few reasons — a simple one is that a single person can only read so much code a day, but another factor is that any patch can have subtle impacts that only arise because of interactions with other code that’s not changing, and being aware of all the potential subtle interactions in the codebase is very hard, and even if you’re aware of the potential impacts, it can take time to realise what they are. Having more changes is thus one problem, but dividing review amongst more people is also a problem: it lowers the chance that a patch with a subtle bug will be reviewed by someone able to realise that some subtle bug even exists. Similarly, having development proceed quickly and efficiently is not always a win here: it reduces the time available to realise there’s a problem before the change is merged and people move on to thinking about the next thing. Modularisation helps here at least: it substantially reduces the chance of interactions with entirely different parts of the project, though of course not entirely. CI also helps, by automating review of classes of potential issues. I think we already do pretty well here with consensus code: there is a lot of review, and things progress slowly; but I do worry about other areas. For example, I was pretty surprised to see PR#20624 get proposed on a Friday and merged on Monday (during the lead-up to Christmas no less); that’s the sort of change that I could easily see introducing subtle bugs that could have serious effects on p2p connectivity, and I don’t think it’s the sort of huge improvement that justifies a merge-first-review-later approach.
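As a toy illustration of the kind of interaction a per-patch review can easily miss (again made-up Python, not real Bitcoin Core code): a helper quietly depends on an ordering invariant that only exists in a caller in some other file, and an innocuous-looking change to that caller breaks it, with neither diff looking wrong on its own.

```python
# Toy illustration only -- made-up functions, not real Bitcoin Core code.
# The helper relies on an ordering invariant that only lives in a caller
# somewhere else; a harmless-looking patch to that caller breaks it.

import bisect

def fee_bucket(sorted_rates, target):
    """Binary search: only correct if sorted_rates is in ascending order."""
    return bisect.bisect_left(sorted_rates, target)

# Original caller, in some other file, happened to keep its data ascending:
rates_before = [1, 5, 12, 30]
# A later patch "improves" that caller to keep the highest rates first:
rates_after = [30, 12, 5, 1]

print(fee_bucket(rates_before, 12))   # 2 -- the intended answer
print(fee_bucket(rates_after, 12))    # 4 -- precondition broken, silently wrong
```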
The final thing I worry about is the risk that attackers might try subtler ways of “firing the devs” than happened last time. After all, if you can replace all the people who would’ve objected to what you want to do, there’s no need to sneak it in and hope no one notices in review, you can just do it, and even if you don’t get rid of everyone who would object you at least lower the chances that your patch will get a thorough review by whoever remains. There are a variety of ways you can do that. One is finding ways of making contributing unpleasant enough that your targets just leave on their own: constant arguments about things that don’t really matter, slowing down progress so it feels like you’re just wasting time, and personal attacks in the media (or on social media), for instance. Another is the cancel-culture approach of trying to make them a pariah so no one else will have anything to do with them. Or there’s the potential for court cases (cf Angela Walch’s ideas on fiduciary duties for developers) or more direct attempts at violence.
I don’t think there’s a direct answer to this — even if all of the above fail, you could still get people to leave by offering them bunches of money and something interesting to do instead, for example. Instead, I think the best defence is more cultural: that is, having a large group of contributors, with strong support for common goals (eg decentralisation, robustness, fixed supply, not losing people’s funds, not undoing transactions) that’s also diverse enough that they’re not all vulnerable to the same attacks.
One of the risks of funding most development in much the same way is that it encourages conformity rather than diversity — an obvious rule for getting sponsored is “don’t bite the hand that feeds you” — eg, BitMEX’s Developer Grant Agreement includes “Not undertaking activities that are likely to bring the reputation of … the Grantor into disrepute”. And I don’t mean to criticise that: it’s a natural consequence of what a grant is. But if everyone working on Bitcoin is directly incentivised to follow that rule, what happens when you need a whistleblower to call out bad behaviour?
Of course, perhaps this is already fine, because there are enough devs who’ll happily quit their jobs if needed, or enough devs who have already hit their FU-money threshold and aren’t beholden to anyone?
To me though, I think it’s a bit of a red flag that LukeDashjr hasn’t gotten one of these funding gigs — I know he’s applied for a couple, and he should superficially be trivially qualified: he’s a long time contributor, he’s been influential in calling out problems with BIP16, in making segwit deployment feasible, in avoiding some of the possible disasters that could have resulted from the UASF activation of segwit, and in working out how to activate taproot, and he’s one of the people who’s good at spotting subtle interactions that risk bugs and vulnerabilities of the sort I talked about above. On the other hand he’s known for having some weird ideas and can be difficult to work with and maybe his expectations are unrealistic. What’s that add up to? Maybe he’s a test case for this exact attack on Bitcoin. Or maybe he’s just had a run of bad luck. Or maybe he just needs to sell himself better, or adopt a more business-friendly attitude — and I guess that’s the attitude to adopt if you want to solve the problem yourself rather than rely on someone else to help.
But… if we all did that, aren’t we hitting that exact “conformity” problem; and doesn’t that more or less leave everyone vulnerable to the “pariah” attack, exploitable by someone pushing your buttons until you overreact at something that’s otherwise innocuous, then tarring you as the sort of person that’s hard to work with, and repeating that process until that sticks, and no one wants to work with you?
While I certainly (and tautologically) like working with people who I like working with, I’m not sure there’s a need for devs to exclusively work with people they find pleasant, especially if the cost is missing things in review, or risking something of a vulnerable monoculture. On the other hand, I tend to think of patience as a virtue, and thus that people who test my patience are doing me a service in much the same way exams in school do — they show you where you’re at and what you need to work on — so it might also be that I’m overly tolerant of annoying people. And I did also list “making working on Bitcoin unenjoyable” as another potential attack vector. So I don’t know that there’s an easy answer. Maybe promoting Luke’s github sponsors page is the thing to do?
Anyway, conclusion.
Despite my initial thoughts above that taproot might be less of a priority this year in order to focus on robustness rather than growth, I think the “let wallets do more multisig so users’ funds are less likely to be lost” idea is still a killer feature, so I think that’s still #1 for me. I think trying to help with making p2p and mempool code be more resilient, more encapsulated and more testable might be #2, though I’m not sure how to mitigate the code churn risk that creates. I don’t think I’m going to work much on CI/tests/static analysis, but I do think it’s important so will try to do more review to help that stuff move forward.
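One partial mitigation for that churn risk, sketched here in Python with made-up function names rather than anything from the actual codebase, is leaning harder on differential tests: hammer the pre- and post-refactor implementations with random inputs and insist they agree, so a supposedly behaviour-preserving change has to demonstrate it.

```python
# Hedged sketch of a differential test over a "move-only" refactor; the
# two select_feerate_* functions are stand-ins, not real Bitcoin Core code.

import random

def select_feerate_old(rates, ceiling):
    """Stand-in for the pre-refactor code: highest rate not above ceiling."""
    best = None
    for r in rates:
        if r <= ceiling and (best is None or r > best):
            best = r
    return best

def select_feerate_new(rates, ceiling):
    """Stand-in for the refactored, supposedly behaviour-preserving version."""
    candidates = [r for r in rates if r <= ceiling]
    return max(candidates) if candidates else None

def test_equivalent(trials=10_000):
    rng = random.Random(2021)
    for _ in range(trials):
        rates = [rng.randint(0, 100) for _ in range(rng.randint(0, 8))]
        ceiling = rng.randint(0, 100)
        assert select_feerate_old(rates, ceiling) == select_feerate_new(rates, ceiling), \
            (rates, ceiling)

if __name__ == "__main__":
    test_equivalent()
    print("old and new implementations agreed on 10,000 random inputs")
```

That doesn’t prove the refactor preserves behaviour, but it’s cheap to run in CI and catches a lot of the accidental divergence that eyeballing a big move-only diff won’t.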
Otherwise, I’d like to get the anyprevout patches brought up to date and testable. In so far as that enables eltoo, which then allows better reliability of lightning channels, that’s kind-of a fit for the robustness theme (and robustness in general, I think, is what’s holding lightning back, and thus fits in with the “keep lightning growing at the same rate as Bitcoin, or better” goal as well). It’s hard to rate that as highly as robustness improvements at the base Bitcoin layer though, I think.
There are plenty of other neat technical things too; but I think this year might be one of those ones where you have to keep reminding yourself of a few fundamentals to avoid getting swept up in the excitement, so keeping the above as foundations is probably a good idea.
Otherwise, I’m hoping I’ll be able to continue supporting other people’s dev funding efforts — whether blue sky, or just keeping on with what’s working so far. I’m also hoping to do a bit more writing — my resolution last year was meant to be to blog more, and didn’t really work out, so why not double down on it? Probably a good start (aside from this post) would be writing a response to the Productivity Commission Right to Repair issues paper; I imagine there’ll probably be some more crypto related issues papers to respond to over this year too…
If for whatever reason you’re reading this looking for suggestions you might want to do rather than what I’m thinking about, here are some that come to my mind:
Money: consider supporting or hiring Luke, or otherwise supporting (or, if it’s in your wheelhouse, doing) Bitcoin dev work, or supporting MIT DCI, or funding/setting up something independent from but equally as good as MIT DCI or Chaincode (in increasing order of how much money we’re talking). If you’re a bank affected by the recent OCC letter on payments, making a serious investment in lightning dev might be smart.
Bitcoin code: help improve internal test coverage, static analysis, and/or build reproducibility; set up and maintain external tests; review code and find bugs in PRs before they get merged. Otherwise there’s a million interesting features to work on, so do that.
Lightning: get PTLCs working (using taproot on signet or ecdsa-based), anyprevout/eltoo, improve spam prevention. Otherwise, implement and fine-tune everything already on lightning’s TODO list.
Other projects: do more testing on signet in general, test taproot integration on signet (particularly for robustness features like multisig), monitor blockchain and mempool activity for oddities to help detect and prevent potential attacks asap.
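For that last one, here’s a very rough sketch of where a mempool-oddity monitor could start: plain Python talking JSON-RPC to a local bitcoind, with the URL, credentials and thresholds below all placeholders, and “odd” defined however you like.

```python
# Very rough sketch, not a finished tool. Polls a local bitcoind over
# JSON-RPC and flags simple mempool oddities; credentials and thresholds
# are placeholders to be replaced with something sensible.

import base64
import json
import time
import urllib.request

RPC_URL = "http://127.0.0.1:8332"
RPC_AUTH = base64.b64encode(b"rpcuser:rpcpassword").decode()

def rpc(method, params=None):
    """Minimal JSON-RPC call to bitcoind using only the standard library."""
    req = urllib.request.Request(
        RPC_URL,
        data=json.dumps({"jsonrpc": "1.0", "id": "monitor",
                         "method": method, "params": params or []}).encode(),
        headers={"Authorization": "Basic " + RPC_AUTH,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

def watch(poll_seconds=60, max_mempool_mb=250, max_min_feerate=0.0005):
    while True:
        info = rpc("getmempoolinfo")
        usage_mb = info["usage"] / 1e6
        if usage_mb > max_mempool_mb:
            print(f"mempool unusually large: {usage_mb:.0f} MB across {info['size']} txs")
        if info["mempoolminfee"] > max_min_feerate:
            print(f"mempool min feerate spiking: {info['mempoolminfee']} BTC/kvB")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```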
(Finally, just in case it’s not already obvious or clear: these are what I think are priorities today; there’s not meant to be any implication that anything outside of these ideas shouldn’t be being worked on)