Introduction

Dryja introduced the idea of Discreet Log Contracts (DLC) in his
breakthrough work [1].
Since then, DLC has become an umbrella term for Bitcoin protocols
that map oracle secret revelation to an on-chain transaction which
apportions coins accordingly.
The key property that each protocol iteration preserves is that the
oracle is an oblivious trusted party -- they do not interact with
the blockchain and it is not possible to tell which event or which
oracle the two parties were betting on with blockchain data alone.
OP_CHECKTEMPLATEVERIFY (CTV), a.k.a. BIP119 [2], is a proposed
upgrade to Bitcoin which is being actively discussed.
CTV makes possible an optimized protocol which improves DLC
performance so dramatically that it solves several user experience
concerns and engineering difficulties.
To me this is the most compelling and practical application of CTV so
I thought it's about time to share it!
Present state of DLC specification

The DLC specifications [3] use adaptor signatures to condition each
possible payout.
The protocol works roughly like this:
1. Oracle(s) announce events along with a nonce R for each event.
   Let's say each event has N outcomes.
2. Two users who wish to make a bet take the R from the oracle
   announcement and construct a set of attestation points S and their
   corresponding payouts.
3. Each attestation point for each of the N outcomes is calculated as
   S_i = R + H(R || X || outcome_i) * X, where X is the oracle's
   static key.
4. The users combine the attestation points into contract execution
   transaction (CET) points, e.g. CET_i = S1_i + S2_i + S3_i.
   Here CET_i is the conjunction (AND) of the event outcomes
   represented by S1_i, S2_i and S3_i.
5. The oracle(s) reveal the attestation s_i, where s_i * G = S_i, if
   the i-th outcome transpired.
6. Either party takes the s_i from each attestation, combines them,
   e.g. cet_i = s1_i + s2_i + s3_i, uses cet_i to decrypt the CET
   adaptor signature encrypted by CET_i and broadcasts the transaction.
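To make the algebra concrete, here is a toy sketch (not an
implementation): scalars modulo a prime stand in for secp256k1 points,
so a "point" is just its discrete log with G treated as 1, and the
oracle keys, nonces and outcomes are made-up values.

```python
import hashlib

# Toy sketch of the attestation / CET point algebra. Scalars mod a prime stand
# in for curve points, so point addition/multiplication become modular
# arithmetic. A real client would use an actual secp256k1 library instead.
q = 2**255 - 19  # a large prime standing in for the group order

def H(*parts) -> int:
    """Hash the parts to a challenge scalar, mimicking H(R || X || outcome)."""
    data = "||".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def attestation_point(R, X, outcome):
    # S_i = R + H(R || X || outcome_i) * X, computed by the bettors in advance.
    return (R + H(R, X, outcome) * X) % q

def attestation_scalar(r, x, outcome):
    # s_i with s_i * G = S_i, revealed by the oracle once the outcome is known.
    R, X = r, x  # in this toy model "k*G" is just k
    return (r + H(R, X, outcome) * x) % q

# Three events, each with a (made-up) oracle key x, nonce r and outcome we bet on.
events = [(101, 11, "sunny"), (202, 22, "BTC>50k"), (303, 33, "team-a-wins")]

# The bettors combine the per-event attestation points into one CET point (an AND).
CET_i = sum(attestation_point(r, x, o) for x, r, o in events) % q

# After the oracles attest, either party combines the revealed scalars; the
# result is exactly the decryption key for the adaptor signature under CET_i.
cet_i = sum(attestation_scalar(r, x, o) for x, r, o in events) % q
assert cet_i == CET_i
```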
Performance issues with DLCs

To set up a contract the two parties must compute:

- E * N attestation points, where E is the number of events you are
  combining and N is the number of outcomes per event (1 mul each).
- C >= E * N CET adaptor signatures, and verify the other party's
  (2 muls each -- or with MuSig2, 3 muls).

Note that the number of CETs can be much greater than the number of
attestation points. For example,
if an oracle decomposes the price of BTC/USD into 20 binary digits,
i.e. 0..(2^20 - 1), you could have
E=20, N=2, C=2^20. So the biggest concern for worst-case performance
is the adaptor signature multiplications.
If we take a multiplication as roughly 50 microseconds, computing
MuSig2 adaptor signatures for ~6000 CETs would take around a second of
CPU time for each party.
6000 CETs is by no means sufficient if you wanted, for example, to bet
on the BTC/USD price per dollar.
Note there may be various ways of precomputing multiplications and
using fast linear combination algorithms and so on but I hope this
provides an idea of the scale of the issue.
Then consider that you may want to use a threshold of oracles, which
will combinatorially increase this number (e.g. a 3/5 threshold would
10x this).
You would also end up sending data on the order of megabytes to each other.
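To make the scaling concrete, here is a back-of-envelope sketch. All
the figures are the rough assumptions from above (50 microseconds per
multiplication, 3 muls per MuSig2 adaptor signature), so treat the
output as order-of-magnitude only.

```python
from math import comb

# Back-of-envelope cost of the adaptor-signature approach.
MUL_US = 50        # assumed cost of one EC point multiplication, microseconds
MUSIG2_MULS = 3    # multiplications per MuSig2 adaptor signature, per party

E, N = 20, 2       # 20 binary digits of a BTC/USD price
C = N ** E         # 2^20 CETs in the worst case

attestation_muls = E * N
adaptor_muls = C * MUSIG2_MULS
threshold_factor = comb(5, 3)   # a 3-of-5 oracle threshold gives 10 combinations

seconds = adaptor_muls * threshold_factor * MUL_US / 1e6
print(f"{attestation_muls} attestation-point muls, "
      f"{adaptor_muls * threshold_factor:,} adaptor-signature muls, "
      f"~{seconds:.0f} s of CPU time per party")
```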
Committing to each CET in a tapleaf with CHECKTEMPLATEVERIFY

What can we do with OP_CTV + Taproot to improve this?
Instead of creating an adaptor signature for every CET, commit to the
CET with OP_CTV in a tapleaf:

<CET-hash_i> CHECKTEMPLATEVERIFY <CET_i> CHECKSIG
When the oracle(s) reveal their attestations, either party can combine
them to get the secret key corresponding to CET_i and spend the coins
to the CET (whose CTV hash is CET-hash_i), which distributes the funds
according to the contract.
This replaces all the multiplications needed for the adaptor
signatures with a few hashes!
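As a sketch of what "a few hashes" per CET means, here is the per-leaf
work under a couple of assumptions: BIP341 tagged hashing for the
leaf, and CTV occupying the OP_NOP4 opcode (0xb3) as proposed in
BIP119. Building the full taproot tree additionally costs the Merkle
branch hashes, but there are no EC multiplications anywhere.

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """BIP340/341-style tagged hash."""
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def cet_leaf_script(ctv_hash: bytes, cet_point: bytes) -> bytes:
    # <32-byte CTV hash> OP_CHECKTEMPLATEVERIFY <32-byte x-only CET_i key> OP_CHECKSIG
    # 0xb3 is OP_NOP4, the opcode BIP119 proposes to repurpose for CTV.
    assert len(ctv_hash) == 32 and len(cet_point) == 32
    return b"\x20" + ctv_hash + b"\xb3" + b"\x20" + cet_point + b"\xac"

def tapleaf_hash(script: bytes, leaf_version: int = 0xC0) -> bytes:
    # BIP341 leaf hash; the compact-size script length is 1 byte for short scripts.
    return tagged_hash("TapLeaf", bytes([leaf_version, len(script)]) + script)

# Per CET the only work is assembling this 68-byte script and hashing it.
example_leaf = tapleaf_hash(cet_leaf_script(bytes(32), bytes(32)))
print(example_leaf.hex())
```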
You will still need to compute the CET_i, which will involve a point
normalisation, but this still brings the computational cost per CET
down from hundreds of microseconds to around 5 (per party).
There will be a bit more data on chain (and a small privacy loss) in
the uncooperative case, but even with tens of thousands of outcomes
it's only going to roughly double the virtual size of the transaction.
Keep in mind the uncooperative case should hopefully be rare too,
especially when we are doing this in channels.
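For a rough sense of where "roughly double" comes from, here is an
assumption-heavy estimate of the extra witness data in the script-path
(uncooperative) spend: a single signature, the 68-byte leaf script
above, and a balanced tree over 2^20 CETs. Exact numbers depend on the
transaction.

```python
from math import ceil, log2

def prefix_len(n: int) -> int:
    # compact-size length prefix for a witness item of n bytes
    return 1 if n < 253 else 3

num_cets = 2**20
depth = ceil(log2(num_cets))        # 20 tree levels for ~a million CET leaves
control_block = 33 + 32 * depth     # BIP341: 33 bytes + 32 per level = 673
script = 68                         # <32B CTV hash> CTV <32B key> CHECKSIG
signature = 64

# script-path witness: item count + signature + leaf script + control block
script_path = 1 + (prefix_len(signature) + signature) \
                + (prefix_len(script) + script) \
                + (prefix_len(control_block) + control_block)
key_path = 1 + prefix_len(signature) + signature  # cooperative spend

extra_vbytes = (script_path - key_path) / 4  # witness bytes count 1/4 toward vsize
print(f"~{extra_vbytes:.0f} extra vbytes over a cooperative key-path spend")
```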
Getting rid of combinatorial complexity of oracle thresholds

The amount of data that the parties need to exchange is also reduced
to a small constant size.
The improvement here is that the amount of computation and
communication does not increase with the number of oracles in the
threshold.
The size of the witness only increases linearly in the number of
oracles, and only in the uncooperative case.
This also solves a security issue with the current spec, because
attestation points from different oracles are no longer summed (which
is a problem [4]).
Getting rid of the attestation point multiplication

It's possible to get rid of the EC multiplications from the
attestation point computation too.
This is optimistically a 10x improvement, but in the most important
cases it's a negligible improvement since computing the E*N
attestation points is a small fraction of the total CET point
computation.
Recall the original Schnorr style DLC attestation point was computed like:
S_i = R + H(R || X || outcome_i) * X
So for each outcome we have to hash it and multiply the result by the
oracle's public key.
I don't think hashing is necessary [6].
First note that an oracle attestation scheme is not a signature scheme:

1. The users need to be able to compute the attestation point
   beforehand (signature schemes do not require the verifier to be
   able to compute anything beforehand).
2. There is a very different concept of a forgery -- you don't care
   about someone being able to forge signatures under the oracle's key
   in general, you only care about them being able to forge an
   attestation corresponding to some previously announced event, i.e.
   you only care about forgeries of things that people are actually
   betting on.
Long story [6] short, we can get rid of the hash and do the following
instead for outcome_i:

S_i = R + i * X

For each outcome_i the oracle will reveal a different linear
combination of R and X.
However, if we still want to preserve the ability to add attestation
points together to create an AND-like condition from attestations by
the same oracle, we have to do:

S_i = i * R + X

which when we combine two attestations from the same oracle (same X,
nonces R1 and R2) becomes:

S1_i + S2_j = (i*R1 + X) + (j*R2 + X) = i*R1 + j*R2 + 2*X

whereas the first form would give:

S1_i + S2_j = (i*X + R1) + (j*X + R2) = (i + j)*X + R1 + R2

which loses the structure and creates collisions, e.g.
S1_1 + S2_2 = S1_2 + S2_1.
Note that this collision problem also exists in the current spec and
original paper [4,5], but there it requires solving a hashed k-sum
problem which should be hard to do in practice.
So, when we compute S_i for i in 1..N, S_1 = R + X and each subsequent
point is S_i = S_{i-1} + R, so we only need to do one addition for each
attestation point.
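Using the same scalar stand-in as the earlier sketch, here is a quick
check of why R + i*X collides, why i*R + X does not, and how the
points are built with one addition each (keys and nonces are arbitrary
toy values):

```python
# Toy check (scalars mod q stand in for curve points, as before).
q = 2**255 - 19
X, R1, R2 = 7_001, 12_345, 67_890  # made-up oracle key and two per-event nonces

def broken(R, i):   # S_i = R + i*X -- combining two events only depends on i + j
    return (R + i * X) % q

def fixed(R, i):    # S_i = i*R + X -- combining keeps i and j distinct
    return (i * R + X) % q

assert (broken(R1, 1) + broken(R2, 2)) % q == (broken(R1, 2) + broken(R2, 1)) % q  # collision
assert (fixed(R1, 1) + fixed(R2, 2)) % q != (fixed(R1, 2) + fixed(R2, 1)) % q      # no collision

def attestation_points(R, X, N):
    """All N points of the fixed form, one addition each."""
    S = (R + X) % q               # S_1 = R + X
    points = [S]
    for _ in range(2, N + 1):
        S = (S + R) % q           # S_i = S_{i-1} + R  (= i*R + X)
        points.append(S)
    return points

assert attestation_points(R1, X, 5)[2] == (3 * R1 + X) % q  # spot check: S_3 = 3*R1 + X
```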
In summary

In the worst case this improves DLC performance by ~30x compared to
using MuSig2 adaptor signatures because it gets rid of all
multiplications for both parties.
In the case of a 3/5 threshold performance would be improved by another 10x.
Depending on the kind of event, removing the attestation point
multiplication will also help.
Communication complexity also becomes constant.
In other words, from the user's perspective everything can happen
pretty much instantly even on more resource constrained devices and
bad internet connections.
The downside of the protocol is that in the uncooperative case the
size of the witness is bigger and the transaction is distinguishable
from other protocols (it's no longer scriptless).
Credits

Thanks to those who pointed out that the earlier R + i*X construction
was broken.

Yeah this is awesome. Not having CTV in Bitcoin is an error Bitcoin
can't afford, IMHO. It's more a question of how at this point, and I
don't think it's contentious among miners, so we might as well just go
with Speedy Trial again.