DLCVM assumes a trusted party will run the computation honestly.
We propose utilizing oracle attestations in DLCs with a different goal in mind: as the basis for a VM in Bitcoin. ... we first construct a DLC...and arrange for an oracle to attest to the result of [some] computation. ... the DLC adaptor signature authorizing the correct execution could be ...[a] multi-party computational process
Other than suggesting the use of multi-party computation (basically a fancy multisig), they explicitly defer any attempt to ensure that the oracle is honest:
How the oracle for a given [contract] is to be implemented is left to the reader as it is outside the scope of this paper.
Basically, they suggest that Alice can pay Bob on the condition that a group of one or more people "authorizes" the transaction, and that this group is trusted to authorize Alice's transaction only if it runs a computer program agreed on by Alice, Bob, and the group.
This generalizes very well, but it's all based on trust. The authorizers can collude with Bob to steal Alice's money without running the computation. A DLC's ordinary trust assumption is that the oracle (or, in this case, the multisig) is not a party to the contract, so it has no incentive to misbehave; but Bob can always inform the oracle about the contract in order to bribe it into letting him take the money at stake. Users of a DLCVM must simply trust that the oracle they use is honest and will not take bribes.
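To make the trust concrete, here is a toy sketch of the flow being described: Alice's payment to Bob is gated on an oracle attestation via an adaptor-signature-style completion. Scalar arithmetic mod a prime stands in for elliptic-curve points, so the "discrete log" here is trivially invertible; this illustrates the protocol flow only, not secure cryptography, and all names and values are made up for illustration.

```python
# Toy DLC flow: Alice's payment to Bob is unlocked by an oracle attestation.
# Scalars mod a prime stand in for curve points -- NOT secure crypto.
import hashlib

N = 2**31 - 1   # toy prime "group order"
G = 7           # toy generator

def point(k):
    """Stand-in for the curve point k*G."""
    return (k * G) % N

def H(*parts):
    """Toy hash-to-scalar."""
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

# Oracle setup: a key pair plus a pre-announced nonce for the event.
oracle_key, oracle_nonce = 1234567, 7654321
P_pub, R = point(oracle_key), point(oracle_nonce)

def attest(outcome):
    """Oracle's Schnorr-style attestation scalar: s = k + H(R, outcome)*x."""
    return (oracle_nonce + H(R, outcome) * oracle_key) % N

def anticipated_point(outcome):
    """Computable by anyone BEFORE attestation: R + H(R, outcome)*P."""
    return (R + H(R, outcome) * P_pub) % N

# Alice "encrypts" her signature to the point for outcome "heads"; once the
# oracle attests to "heads", the revealed scalar t completes it for Bob.
presig = 424242                  # Alice's adaptor (pre-)signature, toy value
t = attest("heads")
assert point(t) == anticipated_point("heads")  # attestation reveals the DL
full_sig = (presig + t) % N      # Bob completes the signature
assert (full_sig - presig) % N == t            # completion leaks t to Alice
```

Note that nothing in this flow constrains *what* the oracle attests to, which is exactly the trust problem: if the oracle signs "heads" without checking anything, Bob gets paid anyway.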
you absolutely missed the joke
reply
One way to mitigate this is to use dumb oracles that only attest to simple things with a strong consensus (e.g., time intervals, stock prices, sports scores). They have to be ubiquitous and act as nothing more than dumb publishers of data. Then you can independently construct a contract from their attestations.
This wouldn't be much of a leap from what exists today with SuredBits and other oracle servers.
But when it comes to the basic operations you would want for a VM, like a boolean comparison of two strings, it gets complicated. For example:
a) You have to interact with the oracles and give them data to "prepare" to sign, which already compromises the integrity of the DLC.
b) You have to use homomorphic encryption so the oracle isn't tipped off to what is actually being computed.
c) You have to trust that oracle not to lie, which may or may not be provable depending on the data and the computation.
d) You can try to use a FROST musig of oracles to spread out the risk. But collusion is already a requirement in order to set up the musig, so you are still trusting the group.
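On the last point, the "spread out the risk" idea boils down to threshold secret sharing: no single oracle holds the attestation secret, but any t of n together do. Here is only the Shamir layer of that idea (FROST layers a threshold Schnorr signing protocol on top; this sketch is not FROST itself, and the field size and values are toys):

```python
# Toy t-of-n Shamir secret sharing over a prime field, sketching how a
# threshold of oracles could jointly control an attestation secret.
import random

P = 2**31 - 1  # small prime field (illustrative only)

def share(secret, t, n):
    """Split `secret` into n shares; any t reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 recover it
```

The catch the comment raises survives intact here: the n parties had to coordinate to create the shares, so a colluding quorum can reconstruct and misuse the secret at will.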
So I would say that you could construct some type of state machine that is reliable and useful and could reasonably be represented as a DLC. But it would really have to be something that makes sense, that can be represented non-interactively by dumb oracles, and that works well within the limitations of deterministic computation.
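The key constraint on such a state machine is that every transition must be triggered by an attestable event and every reachable path must be enumerable in advance, so all settlement transactions can be pre-signed. A minimal sketch, with hypothetical state and event names:

```python
# Hypothetical deterministic state machine whose transitions are driven only
# by dumb-oracle attestations, so every path can be enumerated (and every
# settlement pre-signed) before the contract starts.

TRANSITIONS = {
    ("open", "price>=strike"): "exercised",
    ("open", "price<strike"):  "expired",
    ("open", "timeout"):       "refunded",
}

def run(events, state="open"):
    """Replay attested events deterministically from the initial state."""
    for e in events:
        state = TRANSITIONS.get((state, e), state)
    return state

def enumerate_outcomes(state="open"):
    """Every terminal state reachable from `state` in one attested event."""
    return {e: s for (st, e), s in TRANSITIONS.items() if st == state}
```

Because `enumerate_outcomes` is finite and computable by both parties up front, each branch can be bound to its own adaptor signature, with no interaction needed after setup.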
reply
One way to mitigate this is to use dumb oracles that only attest to simple things with a strong consensus (e.g., time intervals, stock prices, sports scores). ... They have to be ubiquitous and act as nothing more than dumb publishers of data.
I don't think that mitigates the trust issue. [EDIT: Never mind, see the next paragraph.] Even if dumb oracles are ubiquitous, you never really know whether a given oracle is one of the dumb ones or just a wolf in sheep's clothing. You can pick a set of supposedly dumb oracles at random and hope for the best, but regardless, you're still trusting that if Bob tries to bribe them, at least one of them will stand firm.
OK, I just realized that does mitigate it. "1 of 15 is honest" is a weaker assumption than "8 of 15 is honest," for example. It mitigates by "reducing" the number of trusted third parties, which is wonderful. But it only reduces it to a number that remains greater than 1. At the end of the day, trusting that "1 of N is honest" is still trusting the integrity of a federation.
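The difference between the two assumptions is easy to quantify. If (a big if) each oracle independently takes Bob's bribe with some probability q, then under "1 of 15 is honest" the contract only fails when all 15 collude, versus any 8 under "8 of 15 is honest". A quick check, with q = 0.2 as an arbitrary illustrative value:

```python
# Comparing failure probabilities of the two honesty assumptions, assuming
# each oracle independently colludes with probability q (a toy model).
from math import comb

def p_all_collude(q, n):
    """Failure under "1 of n is honest": every oracle must be bribed."""
    return q ** n

def p_at_least_k_collude(q, n, k):
    """Failure under "n-k+1 of n is honest": any k bribed oracles suffice."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

q = 0.2
print(p_all_collude(q, 15))           # all 15 must be bribed: tiny
print(p_at_least_k_collude(q, 15, 8)) # only 8 of 15 must be bribed: far larger
```

Of course, the independence assumption is exactly what a bribe attacks: a single payer correlates everyone's incentives, which is why this remains federation trust rather than trustlessness.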
reply
I wish I were smart enough to understand a lot of what you two wrote.
reply