# Jeremy Rubin's Blog

Here you'll find an assorted mix of content from yours truly. I post about a lot of things, but primarily Bitcoin.


## Day 1: Rubin's Bitcoin Advent Calendar

What says Christmas more than an Advent Calendar to count the days till Santa comes? Honestly, I’m not too sure, I’m a Jew. Happy Hanukkah everyone! But in the spirit of the season, I figured the community would love a series of blog posts (one a day) discussing the future of Bitcoin and Smart Contracts.

You can find an index of all the posts here.

So here’s how it’s going to work:

## Today’s the first day!

Advent Calendars are designed to be from the 4th Sunday before Christmas till Christmas and as such vary in length. Lucky you; this year is a long one! Unfortunately for me, I’ve got to make 26 more exciting posts to pull this off. Each of these posts is going to be short-but-sweet (much like the chocolates you’re used to) and designed to highlight an important concept or idea about Bitcoin Smart Contracting. I’ll put each post on my personal blog, email out a link on the judica newsletter, and tweet it out.

## This series is for you.

It doesn’t matter if you’re a programmer, investor, pleb, or just trying to learn more. Through the series I’ll do my best to thoroughly introduce concepts for anyone to follow along and learn.

## Of course I’m going to be a little biased.

The perspectives shared are my own and the focus is on things that I focus on, but I’ll do my best to present the balance and nuance!

If you want to send me some holiday cheer: 3E6p1UgrgwAFvZAF7xUiRcBR2vAEdYNXjZ

Hope you enjoy the series!

## A Minor Flaw in Bitcoin’s Interpreter

The other day I was writing some tests for BIP-119 (shoutout Gloria for the detailed feedback on improving tests). While attempting to write static test vectors for CTV, I noticed something peculiar. This peculiar thing led me to discover a minor flaw in Bitcoin’s interpreter – it isn’t going to break anything in the short term, but it has implications for how certain upgrades might be done in the future.

In the interpreter we pass in specific flags to check different rules at different times: we generally want the Mempool to be “restrictive” and block validation to be unrestrictive. That sounds like the opposite of what you would want, but the point is to ensure we never break a consensus rule. Because the Mempool is stricter than our node’s consensus validation, everything it admits is guaranteed to pass consensus – which protects e.g. a miner from making a bad block.
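A toy model of that relationship (the flag bits below are made up for illustration; in Bitcoin Core the real sets are `MANDATORY_SCRIPT_VERIFY_FLAGS` and `STANDARD_SCRIPT_VERIFY_FLAGS`):

```python
# Toy model: mempool (policy) flags are a strict superset of consensus flags.
# Bit values here are illustrative, not Bitcoin Core's actual assignments.
SCRIPT_VERIFY_P2SH = 1 << 0
SCRIPT_VERIFY_CHECKSEQUENCEVERIFY = 1 << 1
SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS = 1 << 2

CONSENSUS_FLAGS = SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_CHECKSEQUENCEVERIFY
POLICY_FLAGS = CONSENSUS_FLAGS | SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS

# Every flag consensus checks is also checked by the mempool, so anything
# admitted under POLICY_FLAGS must also validate under CONSENSUS_FLAGS.
assert POLICY_FLAGS & CONSENSUS_FLAGS == CONSENSUS_FLAGS
```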

One of the specific types of “stricter” in the Mempool covers things that may be changed in the future. For example, Taproot (a change proposed to Bitcoin) uses Witness V1 scripts. Before Taproot activates, Witness V1 scripts are always valid, signed or not. After it activates, a new rule takes effect in consensus, and Witness V1 scripts are processed in accordance with Taproot’s rules. Because the Mempool is stricter, it never lets in any Witness V1 script spends until it knows how to properly validate them. That way, a miner who doesn’t want to upgrade to Taproot can keep using the old rules in their Mempool and never mine a bad block.

One of the flags used for this purpose is DISCOURAGE_UPGRADABLE_NOPS. A NOP is simply an opcode in Bitcoin that has no effect (nada). In the future, someone could add a rule to a NOP (e.g., check that the stack args present when the NOP executes satisfy some property, or else the transaction is invalid – but remove nothing from the stack, so old consensus rules still see a valid execution). This is sufficient for consensus, but suppose people have decided to create a bunch of outputs with NOPs in them because they are cute. A fork that adds new semantics to a NOP would then lock those people out of their wallets. To prevent this, the Mempool applies the rule DISCOURAGE_UPGRADABLE_NOPS, which bounces any script using a NOP from the Mempool (but not from consensus of course, should a deviant miner mine such a transaction). Hopefully our users get the message to not use NOPs because we… discourage upgradable NOPs.
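The shape of that check can be sketched like so (a toy model of the interpreter’s NOP handling, not Bitcoin Core’s actual code):

```python
# Toy sketch: how an upgradable NOP behaves under consensus vs. mempool policy.
SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS = 1 << 0

def execute_nop(flags: int):
    """Return (ok, err) for executing an upgradable NOP under `flags`."""
    if flags & SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS:
        # Policy: bounce the transaction from the mempool.
        return (False, "SCRIPT_ERR_DISCOURAGE_UPGRADABLE_NOPS")
    # Consensus: no effect at all (nada); the stack is untouched.
    return (True, None)

# Consensus validation (no policy flags): always passes.
assert execute_nop(0) == (True, None)
# Mempool validation: the spend is discouraged.
assert execute_nop(SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS)[0] is False
```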

CheckSequenceVerify (CSV) was one such NOP before it grew up to be a big n’ important opcode. Essentially, all CSV does is check that the input’s sequence field is set in a particular manner. This lets you set relative block- and time-based locks (e.g., this coin cannot be spent again until this much time has passed). However, we might come up with new kinds of locktimes in the future, so there’s a bit we can set in the sequence that makes it ignored for consensus purposes. Maybe in the future, someone will find something nice to do with it, eh?
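Concretely, the sequence encoding looks like this (constants as defined in BIP-68; the decoder is a minimal sketch):

```python
# BIP-68 relative locktime encoding of the nSequence field.
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # set => no consensus meaning (the upgrade hook)
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22     # set => time-based lock, else height-based
SEQUENCE_LOCKTIME_MASK = 0x0000FFFF       # the lock value itself

def decode_sequence(n: int):
    """Return ("disabled" | "time" | "height", lock value) for an nSequence."""
    if n & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return ("disabled", None)
    value = n & SEQUENCE_LOCKTIME_MASK
    if n & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return ("time", value * 512)  # time locks count in units of 512 seconds
    return ("height", value)          # height locks count in blocks

assert decode_sequence(100) == ("height", 100)
assert decode_sequence((1 << 22) | 10) == ("time", 5120)
assert decode_sequence(1 << 31)[0] == "disabled"
```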

This is the sequence verification code:

```cpp
case OP_CHECKSEQUENCEVERIFY:
{
    if (!(flags & SCRIPT_VERIFY_CHECKSEQUENCEVERIFY)) {
        // not enabled; treat as a NOP3
        break;
    }

    if (stack.size() < 1)
        return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);

    // nSequence, like nLockTime, is a 32-bit unsigned integer
    // field. See the comment in CHECKLOCKTIMEVERIFY regarding
    // 5-byte numeric operands.
    const CScriptNum nSequence(stacktop(-1), fRequireMinimal, 5);

    // In the rare event that the argument may be < 0 due to
    // some arithmetic being done first, you can always use
    // 0 MAX CHECKSEQUENCEVERIFY.
    if (nSequence < 0)
        return set_error(serror, SCRIPT_ERR_NEGATIVE_LOCKTIME);

    // To provide for future soft-fork extensibility, if the
    // operand has the disabled lock-time flag set,
    // CHECKSEQUENCEVERIFY behaves as a NOP.
    if ((nSequence & CTxIn::SEQUENCE_LOCKTIME_DISABLE_FLAG) != 0)
        break;

    // Compare the specified sequence number with the input.
    if (!checker.CheckSequence(nSequence))
        return set_error(serror, SCRIPT_ERR_UNSATISFIED_LOCKTIME);

    break;
}
```


Spot anything funky? Look closer…

```cpp
    // To provide for future soft-fork extensibility, if the
    // operand has the disabled lock-time flag set,
    // CHECKSEQUENCEVERIFY behaves as a NOP.
    if ((nSequence & CTxIn::SEQUENCE_LOCKTIME_DISABLE_FLAG) != 0)
        break;
```


Here, where the comment says CSV behaves as a NOP, we break out and skip all remaining checks – but unlike the genuine upgradable NOPs, we never consult DISCOURAGE_UPGRADABLE_NOPS (or any equivalent flag), so the Mempool happily accepts scripts that exercise this reserved upgrade hook.

See where the problem lies? If we ever did get around to a future upgrade here, then old miners who refuse to upgrade would be more than happy to accept invalid transactions into their mempool, and then following the fork, would end up mining invalid blocks leading to potential network partitions.

That would be bad! Let’s not do that.

What we really should be doing is:

```cpp
    // To provide for future soft-fork extensibility, if the
    // operand has the disabled lock-time flag set,
    // CHECKSEQUENCEVERIFY behaves as a NOP, but the mempool
    // should discourage reliance on this upgrade hook.
    if ((nSequence & CTxIn::SEQUENCE_LOCKTIME_DISABLE_FLAG) != 0) {
        if (flags & SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS)
            return set_error(serror, SCRIPT_ERR_DISCOURAGE_UPGRADABLE_NOPS);
        break;
    }
```


Which is exactly what I propose to do in this PR.

If this solution is adopted, then once the last Bitcoin Core release containing the unpatched code goes End-of-Life, we could safely deploy new sequence rules. Because it takes a while for software to go EOL, I hope we can patch this soon.

## Infrastructure Bill: It's Go Time for Radical Self Custody

The infrastructure bill draft that has been circulating contains language that would have a massive impact on the crypto ecosystem (and Bitcoin) in the United States, and most likely globally. The broad implication of the proposed bill is that many types of service providers would be categorized as brokers, even if fully ‘non-custodial’. E.g., a coinjoin coordinator might be a broker, even if they never take control of the funds, because they are facilitating a transaction. There’s a lot of nuance, and the language is still being changed, so we’ll see where it lands. But that’s not the point of this blog post.

The point of this blog post is that we need to hurry the fuck up and improve the self-sovereign software available and widely used by bitcoiners. You heard me right, hurry the fuck up.

While there’s space for debate around perfect designs and optimal protocol improvements, these discussions take years to turn into code running in end users’ wallets. I do not believe we have time to leisurely improve self-sovereign custody solutions while regulators figure out a wrench to throw in our spokes.

Why am I so concerned about this bill in particular? A confidential source tells me that this language came out of the blue, an executive branch driven regulatory ninja attack of sorts. Normally, when the government looks to regulate an industry, the provisions and terms get floated around by legislators for a long while with industry input, comment periods, and more. Then, when a bill or other rules get passed, it’s something that the industry has at least had a chance to weigh in on and prepare for. My source claims no one has seen the clauses in the infrastructure bill before, and they infer that may mean this is a part of a broader crack-down coming from specific political personalities and agencies. This means we may be seeing government actions further restricting users’ rights in the pipeline much sooner than anyone could anticipate.

I’ve long been saying that we should be deploying BIP-119 CTV for congestion control before we see broad congestion on the network. If you wait until a problem is manifest, it can take years to deploy a solution. This merits proactivity in solving a problem before it comes. Today, the need to improve self-custody looms urgently on the horizon.

CTV is not a panacea. It doesn’t magically fix all custodial issues. But, along with Sapio, it does offer a pathway to dramatically improving self-custody options, letting users customize vault smart contracts that do not depend on any third parties. Deploying CTV now sets in motion the wheels for broad ecosystem support for these enhanced custody protocols. We may come up with better options in the future that obsolete CTV in favor of more clever technologies. I cheer those efforts. But we need solutions for Tomorrow.

A soft-fork activation of CTV could be deployable for Bitcoin imminently, should the community embrace it. The spec is nearly 2 years old, and the code has required only small updates to remain mergeable with other changes to Bitcoin Core. The review burden is 185 lines of consensus code and a couple hundred lines of tests. To that end, I believe it is prudent for the Bitcoin community to embrace the deployment of CTV, and I’m calling on the community to soft-signal intent for a soft-fork activation of CTV.

We cannot control what rules state authorities attempt to mandate. But we can individually control our own compliance with measures we see as unjust, and as a community we can advance technologies and solutions that ensure that choice remains squarely in the hands of every user and not the service providers they may use.

## BIP-118 What Gets Hashed Chart

As a part of my ongoing review of BIP-118 I put together a chart of what gets hashed under the current proposal.

Not tightly checked to be free of errors, but I figured such a chart would be helpful for folks evaluating BIP-118.

Perhaps the BIPs (generally, incl 34x) could be updated to present the information in such a chart – at least for me it’s much clearer than following a bunch of conditional logic. (Maybe, if there’s ever desire for some consensus refactoring, this could be a table in the code replacing the conditional logic.) A few highlighted nuances:

- The input index is never signed (I previously thought one mode signed it).
- Key reuse under APOAS | Default and APOAS | All is a bit extra unsafe given susceptibility to the “half-spend” problem. This limits the usability of APO for covenants a-la CTV because you can’t stop someone from adding inputs to your contract, nor can you prevent half-spend problems when reusing addresses.
- APO signs the amounts; APOAS never does.
- APO signs both the SPK and the tapleaf hash, meaning that APO binds itself to the entire script rather than just its fragment. There’s no setting which is “just this fragment”.
- APO’s signature binds it to a specific script fragment within a taproot key, but not a specific script path.
- The flag “default” is not really a flag at all – when default is used (as an or’d byte) there are different results than when default is inferred (by absence of a byte). (This is maybe a Bitcoin Core specific quirk.)
- There are 16 different possible modes total, so all combinations of flags mean something (advisable or not, as with ACP | None).
- * | Default and * | All overlap, so there’s an opportunity to either reserve or assign 4 additional sighash modes if desired. These could cover some of the gaps above, or be saved for future purposes rather than be wasted now. Another point of interest – not to rock the boat – is that because BIP-118 defines a new key type, we could do away with the notion that sighash flags are “flags” and convert to an enum (e.g., numbered 0–255 for whatever combination of fields each would incur), giving each signature type a sensible name rather than thinking of things as a combo of flags (e.g., APOAS is not some intersection of what APO and ACP do independently).
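The counting in the last two bullets can be sanity-checked with a quick enumeration (the mode names are illustrative, not BIP-118’s byte encoding):

```python
from itertools import product

# 4 output-side modes crossed with 4 input-side modes = 16 total modes.
output_modes = ["DEFAULT", "ALL", "NONE", "SINGLE"]
input_modes = ["", "ANYONECANPAY", "ANYPREVOUT", "ANYPREVOUTANYSCRIPT"]
modes = [" | ".join(m for m in (o, i) if m) for o, i in product(output_modes, input_modes)]
assert len(modes) == 16

# Since * | DEFAULT and * | ALL hash the same fields, only 12 behaviors are
# distinct, leaving 4 slots that could be reserved or reassigned.
distinct = {m.replace("DEFAULT", "ALL") for m in modes}
assert len(distinct) == 12
```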

## Quantum Proofing Bitcoin with a CAT

#### no cats harmed in the making of this post

I recently published a blog post about signing up to a 5 byte value using Bitcoin script arithmetic and Lamport signatures.

By itself, this is neat, but a little limited. What if we could sign longer messages? If we can sign up to 20 bytes, we could sign a HASH160 digest which is most likely quantum safe…

What would it mean if we signed the HASH160 digest of a signature? What the what? Why would we do that?

Well, as it turns out, even if a quantum computer were able to crack ECDSA, it would reveal the private key but not grant the ability to malleate the content of what was actually signed. I asked my good friend and cryptographer Madars Virza if my intuition was correct, and he confirmed that it should be sufficient, but it’s definitely worth closer analysis before relying on this. While an ECDSA signature can be malleated to a different, negative form, if the signature is otherwise made immalleable there should be only one value the commitment can be opened to.

If we required the ECDSA signature be signed with a quantum proof signature algorithm, then we’d have a quantum proof Bitcoin! And the 5 byte signing scheme we discussed previously is a Lamport signature, which is quantum secure. Unfortunately, we need at least 20 contiguous bytes… so we need some sort of OP_CAT like operation.

OP_CAT can’t be directly soft-forked into Segwit v0 because it modifies the stack, so instead we’ll (for simplicity) also show how to use a new opcode with verify semantics, OP_SUBSTRINGEQUALVERIFY, that checks a slice of a string for equality.

Fun Fact: OP_CAT existed in Bitcoin until 2010, when Satoshi “secretly” forked out a bunch of opcodes. So in theory the original Bitcoin implementation supported post-quantum cryptography out of the box!
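Before looking at the script, here’s the same Lamport scheme sketched in Python. This is an illustrative model: sha256 truncated to 20 bytes stands in for HASH160 (RIPEMD-160 availability varies across hashlib builds), and keys are 21-byte preimages like the script’s witness elements.

```python
import os
import hashlib

BITS = 160  # one key pair per bit of the HASH160 digest

def h(x: bytes) -> bytes:
    # Stand-in for HASH160 = RIPEMD160(SHA256(x)): sha256 truncated to 20 bytes.
    return hashlib.sha256(x).digest()[:20]

def keygen():
    # One preimage pair per message bit; publish only the hashes.
    sk = [(os.urandom(21), os.urandom(21)) for _ in range(BITS)]
    pk = [(h(zero), h(one)) for zero, one in sk]
    return sk, pk

def sign(sk, digest: bytes):
    # Reveal the preimage matching each bit of the digest.
    n = int.from_bytes(digest, "big")
    return [sk[i][(n >> i) & 1] for i in range(BITS)]

def verify(pk, digest: bytes, sig):
    n = int.from_bytes(digest, "big")
    return all(h(sig[i]) == pk[i][(n >> i) & 1] for i in range(BITS))

sk, pk = keygen()
digest = h(b"toy ECDSA signature to be quantum-proofed")
assert verify(pk, digest, sign(sk, digest))
```

The script below is the same idea compiled to opcodes: each iteration checks one revealed preimage against the two committed hashes and accumulates the corresponding bit.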

```
... FOR j in 0..=5
<0>
... FOR i in 0..=31
SWAP hash160 DUP <H(K_j_i_1)> EQUAL IF DROP <2**i> ADD ELSE <H(K_j_i_0)> EQUALVERIFY ENDIF
... END FOR
TOALTSTACK
... END FOR

DUP HASH160

... IF CAT AVAILABLE
FROMALTSTACK
... FOR j in 0..=5
FROMALTSTACK
CAT
... END FOR
EQUALVERIFY
... ELSE SUBSTRINGEQUALVERIFY AVAILABLE
... FOR j in 0..=5
FROMALTSTACK <0+j*4> <4+j*4> SUBSTRINGEQUALVERIFY DROP DROP DROP
... END FOR
DROP
... END IF

<pk> CHECKSIG
```


That’s a long script… but will it fit? We need to verify 160 bits of message. Each bit takes around 10 bytes of script, an average of 3.375 bytes per number (counting pushes), and two 21-byte keys – 55.375 bytes of program space and 21 bytes of witness per bit.

It fits! 20*8*55.375 = 8860, which leaves 1140 bytes under the 10,000-byte script limit for the rest of the logic – plenty (around 15–40 bytes required, leaving 1100 free for custom signature checking). The stack size is 160 elements for the hash gadget, or 3360 bytes of witness.
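For the skeptical, the arithmetic above restated (assuming the 10,000-byte script size limit):

```python
# Cost model per signed bit: opcodes + push of 2**i + two 21-byte key hashes.
script_per_bit = 10 + 3.375 + 2 * 21
assert script_per_bit == 55.375

total_script = 20 * 8 * script_per_bit  # 160 bits of message
assert total_script == 8860.0
assert 10000 - total_script == 1140.0   # headroom under the script size limit

witness_bytes = 160 * 21                # one 21-byte preimage per bit
assert witness_bytes == 3360
```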

This can probably be made a bit more efficient by expanding to a ternary representation.

```
SWAP hash160 DUP <H(K_j_i_0)> EQUAL IF DROP ELSE <3**i> SWAP DUP <H(K_j_i_T)> EQUAL IF DROP SUB ELSE <H(K_j_i_1)> EQUALVERIFY ADD ENDIF ENDIF
```


This should bring the cost to roughly 85 bytes per trit, and there should be 101 trits (log(2**160)/log(3) == 100.94), so about 8585 bytes… a bit cheaper! And the witness stack is “only” 2121 bytes…

As a homework exercise, maybe someone can prove the optimal choice of radix for this protocol… My guess is that base 4 is optimal!
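A starting point for that homework, assuming only the digit counts (the per-digit script cost model is the part left to derive):

```python
import math

def digits(radix: int, bits: int = 160) -> int:
    """Digits needed to represent a 160-bit digest in the given radix."""
    return math.ceil(bits / math.log2(radix))

# Data points from the post: ~55.375 bytes per binary digit, ~85 per ternary.
assert digits(2) == 160
assert digits(3) == 101  # log(2**160)/log(3) == 100.94...
assert digits(4) == 80
```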

## Taproot?

What about Taproot? As far as I’m aware, the commitment scheme (Q = pG + hash(pG || m)G) can be securely opened to m even with a quantum computer. (Finding q such that qG = Q might be trivial, but suppose the key path is disabled; then finding m and p such that the taproot equation holds should be difficult because of the hash – though I’d need to certify that claim better.) Therefore this script can nest inside a Tapscript path – and since Tapscript does not impose a length limit, 32-byte hashes could be used as well.

Further, to make keys reusable, many Lamport keys could be committed inside a taproot tree so that an address could be used thousands of times before expiring. This could serve as a measure to protect against accidental reuse rather than to support it.

Lastly, Schnorr actually has a stronger non-malleability property than ECDSA: the signature binds to the approved transaction, and once Lamport-signed, even a quantum computer could not steal the funds.