Jeremy Rubin's Blog

Here you'll find an assorted mix of content from yours truly. I post about a lot of things, but primarily Bitcoin.

categories: Bitcoin, Shenzhen Journey.


Congestion Control

Day 12: Rubin's Bitcoin Advent Calendar

Welcome to day 12 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

Congestion is an ugly word, eh? When I hear it, my fake synesthesia triggers green-slime feelings, being stuck in traffic with broken AC, and ~the bread line~ waiting for your order at a crowded restaurant when you’re super starving. All not good things.

So Congestion Control sounds pretty sweet right? We can’t do anything about the demand itself, but maybe we can make the experience better. We can take a mucinex, drive in the HOV lane, and eat the emergency bar you keep in your bag.

How might this be used in Bitcoin?

  1. Exchange collects N addresses they need to pay some bitcoin
  2. Exchange inputs them into this contract
  3. Exchange gets a single-output transaction, which they broadcast with a high fee to get quick confirmation.
  4. Exchange distributes the redemption paths to all recipients (e.g. via mempool, email, etc).
  5. Users verify that the funds are “locked in” with this contract.
  6. Party
  7. Over time, when users are willing to pay fees, they CPFP pay for their redemptions (worst case cost \(O(\log N)\))

Throughout this post, we’ll show how to build the above logic in Sapio!


Before we get into that…

Talk Nerdy To Me

Let’s define some core concepts… Don’t worry too much if these are a bit hard to get, it’s just useful context to have or think about.

Latency

Latency is the time from some notion of “started” to “stopped”. In Bitcoin you could think of the latency from 0 confirmations on a transaction (in mempool) to 1 confirmation (in a block), which averages about 10 minutes for a high-fee transaction, but could be longer depending on the other transactions competing for block space.

Fairness

Fairness is a measure of how “equitable” a distribution of goods or services is. For example, suppose I want to divide 10 cookies among 10 children.

What if 1 child gets two cookies and the other 9 get 8/9ths of a cookie each? Or what if 1 child gets no cookie and the other 9 get 10/9ths of a cookie each? How fair is that?

Mathematicians and computer scientists love to come up with different measures of fairness to be able to quantitatively compare these scenarios and their relative fairness.

In Bitcoin we might think of different types of fairness: how long does your transaction spend in the mempool? How much fee did you pay?
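To make this concrete, here’s one such measure applied to the cookie scenarios. Jain’s fairness index is a common choice in networking; it isn’t from this post, just an illustrative sketch:

```rust
/// Jain's fairness index: (sum x)^2 / (n * sum x^2).
/// 1.0 means a perfectly equitable split; 1/n means one party got everything.
fn jain_fairness(allocations: &[f64]) -> f64 {
    let n = allocations.len() as f64;
    let sum: f64 = allocations.iter().sum();
    let sum_sq: f64 = allocations.iter().map(|x| x * x).sum();
    (sum * sum) / (n * sum_sq)
}

fn main() {
    // Scenario A: 1 child gets 2 cookies, the other 9 get 8/9ths each.
    let mut a = vec![8.0 / 9.0; 9];
    a.push(2.0);
    // Scenario B: 1 child gets nothing, the other 9 get 10/9ths each.
    let mut b = vec![10.0 / 9.0; 9];
    b.push(0.0);
    println!("equal split: {:.3}", jain_fairness(&[1.0; 10]));
    println!("scenario A:  {:.3}", jain_fairness(&a));
    println!("scenario B:  {:.3}", jain_fairness(&b));
}
```

Amusingly, this particular index scores scenarios A and B identically (0.9 each), which is exactly why people define more than one measure of fairness.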

Throughput & Capacity

Let’s spend another moment on fairness. Perfectly fair would be:

  1. All children get 1 cookie
  2. All children get 1/10th of 1 cookie.
  3. All children get 0 cookies.

Clearly only one of these is particularly efficient.

Thus, we don’t just want to measure fairness, we also want to measure the throughput against the capacity. The capacity is the maximum throughput, and the throughput is essentially how many of those cookies get eaten (usually, over time). Now let’s look at our prior scenarios:

  1. All children get 1 cookie: Perfect Throughput.
  2. All children get 1/10th of 1 cookie: 1/10th Throughput/Capacity.
  3. All children get 0 cookies: 0 Throughput :(

In this case it seems simple: why not just divide the cookies you big butt!

Well sometimes it’s hard to coordinate the sharing of these resources. For example, think about if the cookies had to be given out in a buffet. The first person might just take two cookies, not aware there were other kids who wouldn’t get one!

This maps well onto the Bitcoin network. A really rich group of people might do a bunch of relatively high fee transactions that are low importance to them and inadvertently price out lower fee transactions that are more important to the sender. It’s not malicious, just a consequence of having more money. So even though Bitcoin can achieve 1MB of base transaction data every 10 minutes, that capacity might get filled with a couple big consolidation transactions instead of many transfers.

Burst & Over Provisioning

One issue that comes up in systems is that users show up randomly. How often have you been at a restaurant with no line, you order your food, and then as soon as you sit down the line has ten people in it? “Lucky me,” you think, “I showed up at the right time!” But then ten minutes later the line is clear.

Customers show up kind of randomly. And thus we see big bursts of activity. Typically, in order to accommodate the bursts a restaurant must over-provision its staff. They only make money when customers are there, and they need to serve them quickly. But in between bursts, staff might just be watching grass grow.

The same is true for Bitcoin. Transactions show up somewhat unpredictably, so ideally Bitcoin would have ample space to accommodate any burst (this isn’t true).

Little’s Law

Little’s law is a deceptively simple concept:

\[L = \lambda \times W\]

where \(L = \) length of the queue, \(\lambda = \) the arrival rate and \(W=\) the average time a customer spends in the system.

What’s remarkable about it is that it makes almost no assumptions about the underlying process.

This can be used to think about, e.g., a mempool.

Suppose there are 10,000 transactions in the mempool, and based on historical data we see 57 txns a minute.

\[\frac{10{,}000 \texttt{ transactions}}{57 \texttt{ transactions per minute}} \approx 175 \texttt{ minutes}\]

Thus we can infer how long transactions will on average spend waiting in the mempool, without knowing what the bursts look like! Very cool.
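As a sanity check, the arithmetic above is easy to encode (a throwaway sketch using the example’s made-up numbers):

```rust
/// Little's Law rearranged: W = L / lambda.
/// `queue_len` is the number of items waiting (L), `arrival_rate` is
/// items per unit time (lambda); the result is the average wait (W).
fn average_wait(queue_len: f64, arrival_rate: f64) -> f64 {
    queue_len / arrival_rate
}

fn main() {
    // 10,000 transactions in the mempool, ~57 arriving per minute
    let w = average_wait(10_000.0, 57.0);
    println!("average time in mempool: ~{:.0} minutes", w);
}
```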

I’m just showing off

I didn’t really need to make you read that gobbledygook, but I think they are really useful concepts that anyone who wants to think about the impacts of congestion & control techniques should keep in mind… Hopefully you learned something!


It’s Bitcoin Time

Well, what’s going on in Bitcoin land? When we make a transaction there are multiple different things going on.

  1. We are spending coins
  2. We are creating new coins

Currently, those two steps occur simultaneously. Think of our cookies. Imagine we let one kid at a time get their cookie, and they also have to get their milk at the same time. Then we let the next kid go. It’s going to take

\[T_{milk} + T_{cookies}\]

To get everyone served. What if instead we said kids could get one and then the other, in separate lines.

Now it will take something closer to \(\max(T_{milk}, T_{cookies})\).1 Whichever process is longer will dominate the time. (Probably milk).

Now imagine that getting a cookie takes 1 second per child, and getting a milk takes 30 seconds. Everyone knows that you can have a cookie and have milk after. If children take a random amount of time – let’s say on average 3 minutes, sometimes more, sometimes less – to eat their cookies, then we can serve 10 kids cookies in 10 seconds, making everyone happy, and then fill up the milks while everyone is enjoying a cookie. However, if we did the opposite – got milks and then got cookies, it would take much longer for all of the kids to get something and you’d see chaos.
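A toy model of the two serving strategies, using the hypothetical numbers above (this is just an illustration, not anything Bitcoin-specific):

```rust
/// One combined line: each kid gets milk and a cookie before the next goes.
fn combined_line(kids: u32, t_milk_s: u32, t_cookie_s: u32) -> u32 {
    kids * (t_milk_s + t_cookie_s)
}

/// Two independent lines: the slower line dominates the total time...
fn separate_lines(kids: u32, t_milk_s: u32, t_cookie_s: u32) -> u32 {
    std::cmp::max(kids * t_milk_s, kids * t_cookie_s)
}

/// ...but every kid has a cookie in hand much sooner.
fn last_cookie_served(kids: u32, t_cookie_s: u32) -> u32 {
    kids * t_cookie_s
}

fn main() {
    // 10 kids, 30s per milk, 1s per cookie
    println!("combined line total:  {}s", combined_line(10, 30, 1)); // 310s
    println!("separate lines total: {}s", separate_lines(10, 30, 1)); // 300s
    println!("all cookies out by:   {}s", last_cookie_served(10, 1)); // 10s
}
```

The totals are close, but the latency to “everyone has a cookie” drops from minutes to seconds, which is the whole point.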

Back to Bitcoin. Spending coins and creating new coins is a bit like milk and cookies. We can make the spend correspond to distributing the cookies and setting up the milk line. And the creating of the new coin can be more akin to filling up milks whenever a kid wants it.

What this means practically is that by unbundling spending from redeeming, we can serve a much greater number of users than if they were one aggregate product, because we are taking the “expensive part” and letting it happen later than the “cheap part”. And if we do this cleverly, the “setting up the milk line” in the splitting of the spend allows all receivers to know they will get their fair share later.

This makes the system much higher throughput (unlimited confirmations of transfer) and lower latency to confirmation (you can see when a spend will eventually pay you), but higher latency to coin creation in the best case (although potentially no different than the average case), and (potentially) worse overall throughput, since we have some waste from coordinating the splitting.

It also improves costs because we may be willing to pay a higher price for part one (since it generates the confirmation) than part two.

Can we build it?

Let’s start with a basic example of congestion control in Sapio.

First we define a payment as just being an Amount and an Address.

/// A payment to a specific address
pub struct Payment {
    /// # Amount
    /// The amount to send in btc
    pub amount: AmountF64,
    /// # Address
    /// The Address to send to
    pub address: Address,
}

Next, we’ll define a helper called PayThese, which takes a list of contracts of some kind and pays them after an optional delay in a single transaction.

You can think of this (back to our kids) as calling a group of kids at a time (e.g., table 1, then table 2) to get their cookies.

struct PayThese {
    contracts: Vec<(Amount, Box<dyn Compilable>)>,
    fees: Amount,
    delay: Option<AnyRelTimeLock>,
}
impl PayThese {
    #[then]
    fn expand(self, ctx: Context) {
        let mut bld = ctx.template();
        // Add an output for each contract
        for (amt, ct) in self.contracts.iter() {
            bld = bld.add_output(*amt, ct.as_ref(), None)?;
        }
        // if there is a delay, add it
        if let Some(delay) = self.delay {
            bld = bld.set_sequence(0, delay)?;
        }
        // pay some fees
        bld.add_fees(self.fees)?.into()
    }

    fn total_to_pay(&self) -> Amount {
        let mut amt = self.fees;
        for (x, _) in self.contracts.iter() {
            amt += *x;
        }
        amt
    }
}
impl Contract for PayThese {
    declare! {then, Self::expand}
    declare! {non updatable}
}

Lastly, we’ll define the logic for congestion control. The basic idea is that we define two transactions: one which pays from A -> B, and one which is guaranteed by B’s script to pay from B -> {1…n}. This splits the confirmation txn from the larger payout txn.

However, we’re going to be a little more clever than that. We’ll apply this principle recursively to create a tree.

Essentially what we are going to do is to take our 10 kids and then divide them into groups of 2 (or whatever radix). E.g.: {1,2,3,4,5,6,7,8,9,10} would become { {1,2}, {3,4}, {5,6}, {7,8}, {9,10} }. The magic happens when we recursively apply this idea, like below:

{1,2,3,4,5,6,7,8,9,10}
{ {1,2}, {3,4}, {5,6}, {7,8}, {9,10} }
{ { {1,2}, {3,4} }, { {5,6}, {7,8} }, {9,10} }
{ { {1,2}, {3,4} }, { { { 5,6}, {7,8} }, {9,10} } }
{ { { {1,2}, {3,4}}, { { {5,6}, {7,8} }, {9,10} } } }

The end result of this grouping is a single group! So now we could do a transaction to pay/give cookies to that one group, and then if we wanted kid 9 to get their cookie/sats, we’d only have to publish:

level 0 to: Address({ { { {1,2}, {3,4} }, { { {5,6}, {7,8} }, {9,10} } } })
level 1 to: Address({ { {5,6}, {7,8} }, {9,10} })
level 2 to: Address({9,10})
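The worst-case \(O(\log N)\) redemption cost from the intro falls out of this structure: a recipient needs at most one transaction per tree level. A quick sketch of that count (a hypothetical helper, not part of Sapio):

```rust
/// Worst-case number of transactions a recipient must get confirmed to
/// unroll a radix-`radix` payment tree over `n` recipients: the tree
/// depth, i.e. the ceiling of log base `radix` of `n`.
fn txs_to_redeem(mut n: usize, radix: usize) -> usize {
    assert!(radix >= 2);
    let mut depth = 0;
    while n > 1 {
        // each level merges up to `radix` nodes into one parent
        n = (n + radix - 1) / radix;
        depth += 1;
    }
    depth
}

fn main() {
    println!("{}", txs_to_redeem(10, 2)); // the 10-kid, radix-2 example: 4
    println!("{}", txs_to_redeem(14, 4)); // the 14-recipient, radix-4 TreePay: 2
}
```

(Kid 9 above only needed 3 publishes because their subtree happened to sit near the root; 4 is the worst case for 10 leaves at radix 2.)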

Now let’s show that in code:

/// # Tree Payment Contract
/// This contract is used to help decongest bitcoin
/// while giving users full confirmation of transfer.
#[derive(JsonSchema, Serialize, Deserialize)]
pub struct TreePay {
    /// # Payments
    /// all of the payments needing to be sent
    pub participants: Vec<Payment>,
    /// # Tree Branching Factor
    /// the radix of the tree to build.
    /// Optimal for users should be around 4 or
    /// 5 (with CTV, not emulators).
    pub radix: usize,
    #[serde(with = "bitcoin::util::amount::serde::as_sat")]
    #[schemars(with = "u64")]
    /// # Fee Sats (per tx)
    /// The amount of fees per transaction to allocate.
    pub fee_sats_per_tx: bitcoin::util::amount::Amount,
    /// # Relative Timelock Backpressure
    /// When enabled, exert backpressure by slowing down
    /// tree expansion node by node either by time or blocks
    pub timelock_backpressure: Option<AnyRelTimeLock>,
}

impl TreePay {
    #[then]
    fn expand(self, ctx: Context) {
            // A queue of all the payments to be made initialized with
            // all the input payments
            let mut queue = self
                .participants
                .iter()
                .map(|payment| {
                    // Convert the payments to an internal representation
                    let mut amt = AmountRange::new();
                    amt.update_range(payment.amount);
                    let b: Box<dyn Compilable> =
                        Box::new(Compiled::from_address(payment.address.clone(),
                        Some(amt)));
                    (payment.amount, b)
                })
                .collect::<VecDeque<(Amount, Box<dyn Compilable>)>>();

            loop {
                // take out a group of size `radix` payments
                let v: Vec<_> = queue
                    .drain(0..std::cmp::min(self.radix, queue.len()))
                    .collect();
                if queue.len() == 0 {
                    // in this case, there's no more payments to make so bundle
                    // them up into a final transaction
                    let mut builder = ctx.template();
                    for pay in v.iter() {
                        builder = builder.add_output(pay.0, pay.1.as_ref(), None)?;
                    }
                    if let Some(timelock) = self.timelock_backpressure {
                        builder = builder.set_sequence(0, timelock)?;
                    }
                    builder = builder.add_fees(self.fee_sats_per_tx)?;
                    return builder.into();
                } else {
                    // There are still more, so make this group and add it to
                    // the back of the queue
                    let pay = Box::new(PayThese {
                        contracts: v,
                        fees: self.fee_sats_per_tx,
                        delay: self.timelock_backpressure,
                    });
                    queue.push_back((pay.total_to_pay(), pay))
                }
            }
    }
}
impl Contract for TreePay {
    declare! {then, Self::expand}
    declare! {non updatable}
}

So now what does that look like when we send to it? Let’s do a TreePay with 14 recipients and radix 4:

sapio studio view of treepay

As you can see, the queuing puts some structure into a batched payment! This is (roughly) the same code as above generating these transactions. What this also means is that, given an output and a description of the arguments passed to the contract, anyone can re-generate the expansion transactions and verify that they can eventually receive their money! These payout proofs can also be delivered in a pruned form, but that’s just a bonus.

Everyone gets their cookie (confirmation of transfer) immediately, and knows they can get their milk (spendability) later. A smart wallet could manage your liquidity over pending redemptions, so you could passively expand outputs whenever fees are cheap.


There are a lot of extensions to this basic design, and we’ll see two really exciting ones tomorrow and the next day!

If you want to read more about the impact of congestion control on the network, I previously wrote two articles simulating it, which you can read here:

What’s great about this is that not only does it provide a big benefit for anyone who wants to use it, we show in the Batching Simulation that even with the overheads of a TreePay, the incentive-compatible behavior around exchange batching can actually help us use less block space overall.

  1. Simplifying here – I know Amdahl’s Law… 



Inheritance Schemes for Bitcoin

Day 11: Rubin's Bitcoin Advent Calendar

Welcome to day 11 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

You are going to die.

Merry Christmas! Hopefully not any time soon, but one of these days you will shuffle off this mortal coil.

When that day comes, how will you give your loved ones your hard earned bitcoin?

You do have a plan, right?

This post is a continuation of the last post on Vaults. Whereas Vaults focus on trying to keep your coins away from someone, Inheritance focuses on making sure someone does get your coins. Basically opposites!

Basic Bitcoin Plans

Let’s say you’re a smarty pants and you set the following system up:

(2-of-3 Multisig of my keys) OR (After 1 year, 3-of-5 Multisig of my 4 family members keys and 1 lawyer to tie break)

Under this setup, you can spend your funds secured by a multisig. You have to spend them once a year to keep your greedy family away, but that’s OK.

Until one day, you perish in a boating accident (shouldn’t have gone to that Flamin’ Hot Cheetos Yacht Party in Miami).

A year goes by, no one knows where your 2-of-3 keys are, and so the family’s backup keys go online.

They raid your files and find a utxoset backup with descriptors and know how to combine their keys (that you made for them most likely…) with offline signing devices to sign a PSBT, and the money comes out.

If the family can’t agree, a Lawyer who has your will can tie break the execution.

Except wait…

Your kids are assholes, just like your spouse

So your piece of shit husband/wife doesn’t think the kids should get anything (RIP college fund), so count them out on signing the tuition payments.

Now we’re down to your 3 kids agreeing and your 1 lawyer.

Your Lawyer thinks your spouse has a bit of a case, so the whole thing’s in probate as far as they are concerned.

And the kids? Well, the kids don’t want to go to college. You just gifted them 42069 sats each, enough to pay for a ticket on Elon Musk’s spaceship. So they get together one night, withdraw all the money, and go to Mars. Or the Casino. Little Jimmy has never seen so much money, so he goes to Vegas for a last huzzah before the Mars trip, but he blows it all. So Jimmy stays behind, satless, and the other kids go to mars.

Well That Sucked

And it didn’t have to! What if you could express your last will and testament in Bitcoin transactions instead of in messy messy multisigs? You Can! Today! No new features required (although they’d sure be nice…).


Building Inheritance Schemes with Sapio

You can make inheritance schemes with Sapio! While it does benefit from having CTV enabled for various reasons, technically it can work decently without CTV by pre-signing transactions with a CTV emulator.

Here we’ll develop some interesting primitives that can be used to make various inheritance guarantees.

Making a better Dead Man Switch

First off, let’s make a better dead man switch. Recall we had to move our funds once a year because of the timelocks.

That was dumb.

Instead, let’s make a challenge of liveness! (again, deep apologies on these examples, I’m a bit behind on the series so haven’t checked as closely as I would usually…)

/// Opening state of a DeadManSwitch
#[derive(Clone)]
struct Alive {
    /// Key needed to claim I'm dead
    is_dead: bitcoin::PublicKey,
    /// If someone says i'm dead but I'm alive, backup wallet address
    is_live: bitcoin::Address,
    /// My normal spending key (note: could be a Clause instead...)
    key: bitcoin::PublicKey,
    /// How long you have to claim you're not dead
    timeout: RelTime,
    /// Addresses for CPFP Anchor Outputs
    is_dead_cpfp: bitcoin::Address,
    is_live_cpfp: bitcoin::Address,
}

impl Alive {
    #[guard]
    fn is_dead_sig(self, ctx: Context) {
        Clause::Key(self.is_dead.clone())
    }
    /// only allow the is_dead key to transition to a CheckIfDead 
    #[then(guarded_by="[Self::is_dead_sig]")]
    fn am_i_dead(self, ctx: Context) {
        let dust = Amount::from_sat(600);
        let amt = ctx.funds();
        ctx.template()
            // Send all but some dust to CheckIfDead
            .add_output(amt - dust, &CheckIfDead(self.clone()), None)?
            // used for CPFP
            .add_output(
                dust,
                &Compiled::from_address(self.is_dead_cpfp.clone(), None),
                None,
            )?
            .into()
    }
    /// Allow spending like normal
    #[guard]
    fn spend(self, ctx: Context) {
        Clause::Key(self.key.clone())
    }
}

impl Contract for Alive {
    declare! {finish, Self::spend}
    declare! {then, Self::am_i_dead}
}

/// All the info we need is in Alive struct already...
struct CheckIfDead(Alive);
impl CheckIfDead {
    /// we're dead after the timeout and is_dead key signs to take the money
    #[guard]
    fn is_dead(self, ctx: Context) {
        Clause::And(vec![Clause::Key(self.0.is_dead.clone()), self.0.timeout.clone().into()])
    }

    /// signature required for liveness claim
    #[guard]
    fn alive_auth(self, ctx: Context) {
        Clause::Key(self.0.key.clone())
    }
    /// um excuse me i'm actually alive
    #[then(guarded_by="[Self::alive_auth]")]
    fn im_alive(self, ctx: Context) {
        let dust = Amount::from_sat(600);
        let amt = ctx.funds();
        ctx.template()
            // Send funds to the backup address!
            .add_output(
                amt - dust,
                &Compiled::from_address(self.0.is_live.clone(), None),
                None,
            )?
            // Dust for CPFP-ing
            .add_output(
                dust,
                &Compiled::from_address(self.0.is_live_cpfp.clone(), None),
                None,
            )?
            .into()
    }
}

impl Contract for CheckIfDead {
    declare! {finish, Self::is_dead}
    declare! {then, Self::im_alive}
}

In this example, the funds start in the Alive state, until a challenger calls Alive::am_i_dead or the original owner spends the coin. After the call of Alive::am_i_dead, the contract transitions to the CheckIfDead state. From this state, the owner has timeout (measured in time or blocks) to move the coin to their backup address, or else the claimer of the death can spend using CheckIfDead::is_dead.

Of course, we can clean up this contract in various ways (e.g., making the destination if dead generic). That could look something like this:

struct Alive {
    is_dead_cpfp: bitcoin::Address,
    is_live_cpfp: bitcoin::Address,
    // note that this permits composing Alive with some arbitrary function
    is_dead: Box<dyn Fn(Context, bitcoin::Address) -> TxTmplIt>,
    is_live: bitcoin::Address,
    key: bitcoin::PublicKey,
    timeout: RelTime,
}

impl CheckIfDead {
    #[then]
    fn is_dead(self, ctx: Context) {
        // call the stored closure to build the payout transaction
        (self.0.is_dead)(ctx, self.0.is_dead_cpfp.clone())
    }
}

This kind of dead man switch is much more reliable than having slowly eroding timelocks since it doesn’t require regular transaction refreshing, which was the source of a bug in Blockstream’s federation code. It also requires an explicit action to claim a lack of liveness, which also gives information about the trustworthiness of your kids (or any exploits of their signers).

Not so fast

What if we want to make sure that little Jimmy and his gambling addiction don’t blow it all at once… What if, instead of giving Jimmy one big lump sum, we gave him a little bit every month? Then maybe he’d be better off! This is basically an Annuity contract.

Now let’s have a look at an annuity contract.

struct Annuity {
    to: bitcoin::PublicKey,
    amount: bitcoin::Amount,
    period: AnyRelTime
}

const MIN_PAYOUT: bitcoin::Amount = bitcoin::Amount::from_sat(10000);
impl Annuity {
    #[then]
    fn claim(self, ctx:Context) {
        let amt = ctx.funds();
        // Basically, while there are funds left this contract recurses to itself,
        // until there's only a little bit left over.
        // No need for CPFP since we can spend from the `to` output for CPFP.
        if amt - self.amount > MIN_PAYOUT {
            ctx.template()
                .add_output(self.amount, &self.to, None)?
                .add_output(amt - self.amount, &self, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else if amt > Amount::from_sat(0) {
            ctx.template()
                .add_output(amt, &self.to, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else {
            // nothing left to claim
            empty()
        }
    }
}

We could instead “transpose” an annuity into a non-serialized form. This would basically be a big transaction that has N outputs with locktimes on claiming each. However this has a few drawbacks:

  1. Claims are non-serialized, which means every claim’s relative timelock is measured from the same funding transaction, and relative timelocks can last at most around 15 months (65,535 blocks under BIP-68). Therefore only absolute timelocks may be used for longer schedules.

  2. You might want to make it possible for another entity to counterclaim Jimmy’s funds back, perhaps if he also died (talk about bad luck). In the transposed version, you would need to make N proof-of-life challenges vs. just one1.

  3. You would have to pay more fees all at once (although less fees overall if feerates increase or stay flat).

  4. It’s less extensible – serialization of payouts makes a lot of cool things possible (e.g., allowing oracles to inflation-adjust the payout rate) that the transposed form rules out.
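On drawback 1, the ceiling comes from BIP-68: relative locktimes are encoded in 16 bits, so at most 65,535 blocks, or 65,535 units of 512 seconds for time-based locks. A back-of-envelope check, assuming 10-minute blocks:

```rust
/// BIP-68 relative locktimes use a 16-bit value: up to 65,535 blocks,
/// or 65,535 units of 512 seconds when the time-based flag is set.
const BIP68_MAX: f64 = 65_535.0;

fn max_relative_lock_days_blocks() -> f64 {
    BIP68_MAX * 10.0 / (60.0 * 24.0) // ~10 minutes per block
}

fn max_relative_lock_days_time() -> f64 {
    BIP68_MAX * 512.0 / 86_400.0 // 86,400 seconds per day
}

fn main() {
    println!("block-based: ~{:.0} days", max_relative_lock_days_blocks()); // ~455
    println!("time-based:  ~{:.0} days", max_relative_lock_days_time()); // ~388
}
```

So a single relative timelock tops out at roughly 13–15 months; anything longer needs absolute locktimes or serialization.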

Splits

Remember our annoying spouse, bad lawyer, etc? Well, instead of giving them a multisig, imagine we use the split function as the end output from our CheckIfDead:

fn split(ctx: Context, cpfp: bitcoin::Address) -> TxTmplIt {
    let dust = Amount::from_sat(600);
    let amt = ctx.funds() - dust;
    ctx.template()
        .add_output(dust, &Compiled::from_address(cpfp, None), None)?
        // half to the spouse, a sixth to each of the three kids
        .add_output(amt / 2, &from_somewhere::spouse_annuity, None)?
        .add_output(amt / 6, &from_somewhere::kids_annuity[0], None)?
        .add_output(amt / 6, &from_somewhere::kids_annuity[1], None)?
        .add_output(amt / 6, &from_somewhere::kids_annuity[2], None)?
        .into()
}

This way we don’t rely on any pesky disagreement over what to sign, the funds are split exactly how we like.

Oracles and Lawyers

Lastly, it is possible to bake all sorts of conditionality into these contracts.

For example, imagine an Annuity that only makes payouts if a University Attendance Validator signs your tuition payment, otherwise you get the coins on your 25th Birthday.

struct Tuition {
    /// keep this key secret from the school
    to: bitcoin::PublicKey,
    enrolled: bitcoin::PublicKey,
    school: bitcoin::PublicKey,
    amount: bitcoin::Amount,
    period: AnyRelTime,
    birthday: AbsTime,
}

const MIN_PAYOUT: bitcoin::Amount = bitcoin::Amount::from_sat(10000);
impl Tuition {
    #[guard]
    fn enrolled(self, ctx: Context) {
        Clause::And(vec![Clause::Key(self.enrolled), Clause::Key(self.to)])
    }
    #[then(guarded_by="[Self::enrolled]")]
    fn claim(self, ctx:Context) {
        let amt = ctx.funds();
        if amt - self.amount > MIN_PAYOUT {
            // send money to school
            ctx.template()
                .add_output(self.amount, &self.enrolled, None)?
                .add_output(amt - self.amount, &self, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else if amt > 0 {
            // give the change to child
            ctx.template()
                .add_output(amt, &self.to, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else {
            empty()
        }
    }
    #[guard]
    fn spend(self, ctx: Context) {
        Clause::And(vec![self.birthday.into(), Clause::Key(self.to)])
    }
}

The oracle can’t really steal funds here – they can only sign the already agreed on txn and get the tuition payment to the “school” network. And on the specified Birthday, if not used for tuition, the funds go to the child directly.

Where do these live?

In theory, what you’d end up doing is attaching these to every coin in your wallet under a dead man’s switch.

Ideally, you’d put enough under your main “structured” splits that you’re not moving them all too often, and then you would have the rest go into less structured stuff. E.g., the college fund coins you might touch less frequently than the coins for the general annuity. You can also sequence some things using absolute timelocks, for example.

In an ideal world you would have a wallet agent that is aware of all your UTXOs and your will and testament state and makes sure to regenerate the correct conditions whenever you spend and then store them durably, but that’s a bit futuristic for the time being. With CTV the story is a bit better, as for many designs you could distribute a WASM bundle for your wallet to your family and they could use that to generate all the transactions given an output, without needing to have every presigned transaction saved.

This does demonstrate a relative strength of the account model: it’s much easier to keep all your funds in one account and write globally correct inheritance vault logic around it for all your funds, computed across percentages. No matter the UTXO-model covenant, the fact that someone might have multiple UTXOs poses an inherent challenge to doing this kind of stuff properly.

What else?

Well, this is just a small sampling of things you could do. Part of the power of Sapio is that I hope you’re feeling inspired to make your own bespoke inheritance scheme in it! No one size fits all, ever, but perhaps with the power of Sapio available to the world we’ll see a lot more experimentation with what’s possible.


Till next time – Jeremy.

  1. Note this is a case where unrolling can be used, but the contract sizes can blow up kinda quick, so careful programming might be needed or you might need to say that it can only be claimed that Jimmy is dead once or twice before he just gets all the money. Recursive covenants would not necessarily have this issue. 



Building Vaults on Bitcoin

Day 10: Rubin's Bitcoin Advent Calendar

Welcome to day 10 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

A “Vault” is a general concept for a way of protecting Bitcoin from theft through a cold-storage smart contract. While there is no formal definition of what is and is not a Vault, generally a Vault has more structure around a withdrawal than just a multisig.

One of the earlier references for Vaults was a design whereby every time you request to withdraw from it you can “reset” the request within a time limit. This means that while an attacker might steal your keys, you can “fight” to make it a negative sum game – e.g., they’ll just keep on paying fees to eventually steal an amount less than they paid. This might serve to disincentivize hacking exchanges if hackers are less likely to actually get coins.

Similar Vaults can be built using Sapio, but the logic for them involves unrolling the contract a predefined number of steps. This isn’t bad: if the timeout period is 1 week, unrolling just 5,200 steps gets you one hundred years of hacking disincentive.
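The arithmetic behind that claim, as a quick sketch:

```rust
/// Years of coverage for an unrolled vault: `steps` unrollings, each
/// guarded by a `period_weeks`-week relative timelock.
fn vault_coverage_years(steps: u32, period_weeks: u32) -> f64 {
    (steps * period_weeks) as f64 / 52.0
}

fn main() {
    // 5,200 one-week steps
    println!("{} years", vault_coverage_years(5_200, 1)); // 100 years
}
```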

The contract for that might look something like this in Sapio (note: I was running behind on this post so I may make modifications to make these examples better later):

struct VaultOne {
    /// Key that will authorize:
    /// 1) Recursing with the vault
    /// 2) Spending from the vault after not moved for a period
    key: bitcoin::PublicKey,
    /// How long should the vault live for
    steps: u32,
}

impl VaultOne {
    /// Checks if steps are remaining
    #[compile_if]
    fn not_out_of_steps(self, ctx: Context) {
        if self.steps == 0 {
            ConditionalCompileType::Never
        } else {
            ConditionalCompileType::NoConstraint
        }
    }

    #[guard]
    fn authorize(self, ctx: Context) {
        Clause::Key(self.key.clone())
    }

    /// Recurses the vault if authorized
    #[then(compile_if = "[Self::not_out_of_steps]", guarded_by = "[Self::authorize]")]
    fn step(self, ctx: Context) {
        let next = VaultOne {
            key: self.key.clone(),
            steps: self.steps - 1,
        };
        let amt = ctx.funds();
        ctx.template()
            .add_output(amt, &next, None)?
            // For Paying fees via CPFP. Note that we should totally definitely
            // get rid of the dust limit for contracts like this, or enable
            // IUTXOS with 0 Value
            .add_output(Amount::from_sat(0), &self.key, None)?
            .into()
    }
    /// Allow spending after a week long delay
    #[guard]
    fn finish(self, ctx: Context) {
        Clause::And(vec![
            Clause::Key(self.key.clone()),
            RelTime::try_from(Duration::from_secs(7 * 24 * 60 * 60))
                .unwrap()
                .into(),
        ])
    }
}
/// Binds the logic to the Contract
impl Contract for VaultOne {
    declare! {then, Self::step}
    declare! {finish, Self::finish}
}

But we can also build much more sophisticated Vaults that do more. Suppose we want to have a vault where once a week you can claim a trickle of bitcoin into a hot wallet, or you can send it back to a cold storage key. This is a “structured liquidity vault” that gives you time-release Bitcoin. Let’s check out some code and talk about it more:

#[derive(Clone)]
struct VaultTwo {
    /// Key just for authorizing steps
    authorize_key: bitcoin::PublicKey,
    amount_per_step: bitcoin::Amount,
    /// Hot wallet key
    hot_key: bitcoin::PublicKey,
    /// Cold wallet key
    cold_key: bitcoin::PublicKey,
    steps: u32,
}

impl VaultTwo {
    #[compile_if]
    fn not_out_of_steps(self, ctx: Context) {
        if self.steps == 0 {
            ConditionalCompileType::Never
        } else {
            ConditionalCompileType::NoConstraint
        }
    }

    #[guard]
    fn authorized(self, ctx: Context) {
        Clause::Key(self.authorize_key.clone())
    }
    #[then(compile_if = "[Self::not_out_of_steps]", guarded_by = "[Self::authorized]")]
    fn step(self, ctx: Context) {
        // Creates a recursive vault with one fewer steps
        let next = VaultTwo {
            steps: self.steps - 1,
            ..self.clone()
        };
        let amt = ctx.funds();
        ctx.template()
            // send to the new vault
            .add_output(amt - self.amount_per_step, &next, None)?
            // withdraw some to hot storage
            .add_output(self.amount_per_step, &self.hot_key, None)?
            // For Paying fees via CPFP. Note that we should totally definitely
            // get rid of the dust limit for contracts like this, or enable
            // IUTXOS with 0 Value
            .add_output(Amount::from_sat(0), &self.authorize_key, None)?
            // restrict that we have to wait a week
            .set_sequence(
                -1,
                RelTime::try_from(Duration::from_secs(7 * 24 * 60 * 60))?.into(),
            )?
            .into()
    }
    /// allow sending the remaining funds into cold storage
    #[then(compile_if = "[Self::not_out_of_steps]", guarded_by = "[Self::authorized]")]
    fn terminate(self, ctx: Context) {
        ctx.template()
            // send the remaining funds to cold storage
            .add_output(self.amount_per_step * self.steps, &self.cold_key, None)?
            // For Paying fees via CPFP. Note that we should totally definitely
            // get rid of the dust limit for contracts like this, or enable
            // IUTXOS with 0 Value
            .add_output(Amount::from_sat(0), &self.authorize_key, None)?
            .into()
    }
}

impl Contract for VaultTwo {
    declare! {then, Self::step, Self::terminate}
}

This type of Vault is particularly interesting for, e.g., withdrawing from an exchange business. Imagine a user, Elsa, who wants to have a great cold storage system. So Elsa sets up an xpub key and puts it on ice. She then generates a new address and requests that the exchange send the funds to it. Later that month, Elsa wants to buy a coffee with her Bitcoin, so she has to thaw out her cold storage to spend (maybe using an offline PSBT signing setup) and transfer the funds to her destination, or to a hot wallet if she wants a bit of extra pocket money. Instead, suppose Elsa sets up a time-release vault. Then she can set up her cold vault and automatically be able to claim 1 Bitcoin a month out of it, or, if she notices some coins missing from her hot wallet, redirect the funds solely back under her ice castle.

This has many benefits for an average user. One is that you can invest in your cold storage of keys once in your life and only have to access it in unexpected circumstances. This means that: users might elect to use something more secure/inconvenient to access (e.g. strongly geo-sharded); that they won't reveal access patterns by visiting their key storage facility; and that they don't need to expose themselves to recurring fat-finger1 risk.

Getting a little more advanced

What are some other things we might want to do in a vault? Let’s do a quickfire – we won’t code these here, but you’ll see examples of these techniques in posts to come:

Send a percentage, not a fixed amount

Let the contract know the intended amount, and then compute the withdrawals as percentages in the program.
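A minimal sketch of that computation (plain Rust, not Sapio; `withdrawal_per_step` is a hypothetical helper, and doing the math in satoshis avoids floating point):

```rust
// Compute a fixed-percentage withdrawal from the intended total, in sats.
// Integer division truncates; a real contract would account for the remainder.
fn withdrawal_per_step(total_sats: u64, percent: u64) -> u64 {
    total_sats * percent / 100
}

fn main() {
    let total = 10_0000_0000u64; // hypothetical vault balance: 10 BTC in sats
    // 5% of the initial amount per step: a fixed 0.5 BTC each step.
    assert_eq!(withdrawal_per_step(total, 5), 5000_0000);
}
```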

Non-Key Destinations

In the examples above, we use keys for hot wallet, cold wallet, and authorizations.

However, we could very well use other programs! For example, imagine a time-release vault that goes into a anti-theft locker.

Change Hot Wallet Every Step

This one is pretty simple – if you have N steps just provide a list of N different destinations and use the i-th one as you go!
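The indexing might look like this (plain Rust, not Sapio; the key names are placeholders):

```rust
// With N total steps and N destinations, the number of steps already
// elapsed (total - remaining) indexes the destination for this step.
fn hot_key_for_step<'a>(keys: &[&'a str], total_steps: usize, steps_remaining: usize) -> &'a str {
    keys[total_steps - steps_remaining]
}

fn main() {
    let keys = ["hot_key_0", "hot_key_1", "hot_key_2"]; // hypothetical destinations
    // At the start (3 of 3 steps remaining), use the first key...
    assert_eq!(hot_key_for_step(&keys, 3, 3), "hot_key_0");
    // ...and after one step, the second.
    assert_eq!(hot_key_for_step(&keys, 3, 2), "hot_key_1");
}
```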

Topping up:

There are advanced techniques that can be used to allow depositing into a vault after it has been created (i.e., topping up), but that's too advanced to go into detail today. For those inclined, a small hint: make the “top up” vault consume an output from the previous vault; CTV commits to the script, so you can use a salted P2SH output.

Even more advanced

What if we want to ensure that after a withdrawal, funds are re-inserted into the Vault?

We’ll ditch the recursion (for now), and just look at some basic logic. Imagine a coin is held by a cold storage key, and we want to use Sapio to generate a transaction that withdraws funds to an address and sends the rest back into cold storage.

struct VaultThree {
    key: bitcoin::PublicKey,
}

/// Special struct for passing arguments to a created contract
enum Withdrawal {
    Send {
        addr: bitcoin::Address,
        amount: bitcoin::Amount,
        fees: bitcoin::Amount,
    },
    Nothing,
}
/// required...
impl Default for Withdrawal {
    fn default() -> Self {
        Withdrawal::Nothing
    }
}
impl StatefulArgumentsTrait for Withdrawal {}

/// helper for rust type system issue
fn default_coerce(
    k: <VaultThree as Contract>::StatefulArguments,
) -> Result<Withdrawal, CompilationError> {
    Ok(k)
}

impl VaultThree {
    #[guard]
    fn signed(self, ctx: Context) {
        Clause::Key(self.key.clone())
    }
    #[continuation(guarded_by = "[Self::signed]", coerce_args = "default_coerce")]
    fn withdraw(self, ctx: Context, request: Withdrawal) {
        if let Withdrawal::Send { amount, fees, addr } = request {
            let amt = ctx.funds();
            ctx.template()
                // send the rest recursively to this contract
                .add_output(amt - amount - fees, self, None)?
                // process the withdrawal
                .add_output(amount, &Compiled::from_address(addr, None), None)?
                // mark fees as spent
                .spend_amount(fees)?
                .into()
        } else {
            empty()
        }
    }
}
impl Contract for VaultThree {
    declare! {updatable<Withdrawal>, Self::withdraw}
}

Now we’ve seen how updatable continuation clauses can be used to dynamically pass arguments to a Sapio contract and let the module figure out what the next transactions should be, managing recursive and non-enumerated state transitions (albeit with a trust model).


That’s probably enough for today, before I make your head explode. We’ll see more examples soon!

  1. Sending the wrong amount because you click the wrong key with your too-large hands. 



Sapio Primer

Day 9: Rubin's Bitcoin Advent Calendar

Welcome to day 9 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

We’re through the basics sections of the Advent calendar now! Time for some more… specific content on the bleeding edge!

This post is your introduction to the world of Sapio. Sapio is the programming framework I’ve been developing for Bitcoin Smart Contracts. There’s a ton of material on the website, so this post is going to be a bit high-level and then you should jump into the docs after to learn more.

What the heck is Sapio?

Sapio is a tool that helps you design and use Bitcoin smart contracts based on covenants (like CTV) as well as manage potentially recursive state transitions at terminal states.

That’s a mouthful and a half… let’s break it down with a very basic vault deposit example.

Suppose I have 10 bitcoin sitting in my normal wallet. I want to deposit it to an exchange. I go to my exchange and request an address to deposit to. The exchange wants their coins to be in a special cold storage whereby any move from cold storage has to “mature” for 10 days after it is claimed before it's spendable as a hot-spend; otherwise it stays in cold. The hot wallet has logic such that any funds left unused after it transacts go back into the cold-storage contract. We saw a contract like this in the day 7 post.

The exchange can use Sapio to generate an address that expects 10 coins and encodes this cold-to-hot logic without requiring the cold keys be online! Better than that, I don't even have to contact the exchange for the address. The exchange can distribute a code-signed Sapio WASM applet that runs locally on my own machine. I download the applet into my Sapio Studio GUI, which generates the exchange deposit UX form for the contract; I (or my wallet) automatically fill it out, and it generates a proper address/spending transaction.

Upon receipt of the deposit information (which can, in certain circumstances, be completely on-chain in the txn, so no need for a separate communication channel), the exchange can use the WASM to generate an identical deposit program to verify the user isn't cheating somehow. Bada-bing-bada-boom!

We’ll see in close detail examples like this coming in the following posts, but to sum up, Sapio helped us with the following:

  1. Authoring a Smart Contract Application for a cold storage deposit solution
  2. Distributing it as a deterministic executable with a GUI, which a user runs to make a deposit
  3. Receiving funds as a depositee directly into a smart contract
  4. Generating withdrawal transactions out of the vault
  5. Putting the remaining funds back into the cold storage

This is not hypothetical; all of these components exist and are usable today! The one asterisk is that BIP-119 CTV does not yet exist, so for apps like this the exchange would have to run some kind of signing server you connect to. This works, but it is a worse trust model. For some applications, you don't need CTV if you can get all of a contract's parties to run their own oracles, so you can still accomplish a lot, without the worse trust model, using what's there today!


Over the remaining posts we’ll go into great detail on different applications built in Sapio, but for now you can skim through learn.sapio-lang.org to get started playing around with your own designs.



Contracting Primitives and Upgrades to Bitcoin

Day 8: Rubin's Bitcoin Advent Calendar

Welcome to day 8 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

In this post we’ll rapid fire roll through a bunch of different smart contract primitives, existing and proposed. For a more thorough reading, links will be provided.

BIP-119 CTV CheckTemplateVerify

CTV is a general purpose smart contract opcode with full enumeration, no dynamic state, no recursion, and primarily works through validation.

Essentially, CTV only lets you select a specific next transaction that can occur. Consensus just checks a transaction hash against a CTV hash.
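The shape of that consensus check can be sketched in plain Rust. To keep the sketch dependency-free, `DefaultHasher` from the standard library stands in for the SHA-256-based template hash BIP-119 actually specifies, and the field set here is a simplified assumption, not the exact BIP-119 digest:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified stand-in for the fields a CTV template hash commits to.
#[derive(Hash)]
struct TemplateFields {
    version: i32,
    locktime: u32,
    sequences: Vec<u32>,
    outputs: Vec<(u64, Vec<u8>)>, // (amount, scriptPubKey)
    input_index: u32,
}

// NOT SHA-256: std's DefaultHasher, purely so the sketch runs standalone.
fn template_hash(t: &TemplateFields) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

// The consensus-style rule: the spending transaction's template hash must
// exactly match the hash committed in the script.
fn ctv_check(committed: u64, tx: &TemplateFields) -> bool {
    template_hash(tx) == committed
}

fn main() {
    let tx = TemplateFields {
        version: 2,
        locktime: 0,
        sequences: vec![0xFFFF_FFFF],
        outputs: vec![(100_000, vec![0x51])],
        input_index: 0,
    };
    let committed = template_hash(&tx);
    assert!(ctv_check(committed, &tx));
    // Any change to the committed fields (here, an output amount) fails.
    let tx2 = TemplateFields { outputs: vec![(99_999, vec![0x51])], ..tx };
    assert!(!ctv_check(committed, &tx2));
}
```

Because the hash commits to the outputs, committing to a hash is exactly what pins down “the specific next transaction.”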

Although this seems to be limited functionality, it can be used with a template metaprogramming system such as Sapio to create sophisticated programs.

The limited functionality isn't a bug, it's a feature. CTV was designed to be quick and easy to garner technical consensus with the entire Bitcoin community as a simple and safe covenant, without some of the issues more sophisticated covenant systems might have. However, since its proposal there has been growing interest in more flexible covenants, which may take much longer to deploy before they deliver meaningful benefits to users.

CTV is also designed to work well with other opcodes that might be added (such as CSFS, OP_AMOUNT, and OP_CAT), so it does not become irrelevant should more features be added; it simply gets better.

CTV is currently a decently reviewed BIP pending more support from the community for inclusion (see social signals).

Disclosure: I’m the author/advocate of BIP-119.

For more:

  1. Optech
  2. utxos.org
  3. Templates, Eltoo, and Covenants, Oh My!
  4. Shinobi’s Covenant Concerns

BIP-118 APO AnyPrevout

AnyPrevout is the culmination of research for the Lightning Network (dating back to the original whitepaper) into creating a type of “rebindable” bitcoin transaction that dramatically simplifies the protocols for LN by getting rid of a lot of the complexity around storing state and closing channels unilaterally. AnyPrevout helps make Decker Channels possible (confusingly, these are sometimes called Eltoo, not to be confused with L2).

The basics of how AnyPrevout works: it changes which parts of a transaction a signature commits to, excluding the specifics of the coin being spent. This has some drawbacks in terms of changing invariants that currently hold for signatures, but it is generally safe.

APO can also be used to implement something similar to CTV, but there are sufficient differences between the two (including with respect to efficiency) such that the proposals aren’t competitive.

APO is currently a decently reviewed BIP pending more support from the community for inclusion. The largest blocker for wider support is a concrete functional prototype of LN with Decker Channels, which would drive surety that APO has “product market fit”. Certain developers believe that additional proposals, like SIGHASH_BUNDLE, would be required to make it fully functional.

  1. My BIP-118 Review
  2. The BIP
  3. Eltoo/Decker Channels
  4. Templates, Eltoo, and Covenants, Oh My!

TLUV TapLeafUpdateVerify

TLUV is a proposed general purpose smart contract opcode that is open-ended, has dynamic local state, is recursive, and is somewhat computational.

Essentially, TLUV lets you modify the Taproot output being spent by changing the top-level key and the script paths. TLUV can only read and affect a single input/output pair; the other outputs are unaffected. The functionality of TLUV is very “specific” to the implementation details of Taproot, as it must correctly modify the data structures behind it. For example, you could have a Taproot output with 10 coins and a script like:

[{"amt": 10,
  "key": "multi(A,B,C)",
  "scripts": ["signed(A) with up to 2 coins",
              "signed(B) with up to 5 coins",
              "signed(C) with up to 3 coins"]
 }
]

and TLUV would enable you to transition to the following outputs:

[{"amt": 9,
  "key": "multi(A,B,C)",
  "scripts": ["signed(A) with up to 1 coins",
              "signed(B) with up to 5 coins",
              "signed(C) with up to 3 coins"]
 },
 {"amt": 0.25,
  "address": "someone paid by A"
 },
 {"amt": 0.75,
  "address": "someone else paid by A"
 }
]

or even a full exit:

[{"amt": 9,
  "key": "multi(B,C)",
  "scripts": ["signed(B) with up to 5 coins",
              "signed(C) with up to 3 coins"]
 },
 {"amt": 0.25,
  "address": "someone paid by A"
 },
 {"amt": 0.75,
  "address": "someone else paid by A"
 },
 {"amt": 1,
  "address": "A's key (exiting funds)"
 }
]

There are some potential footguns around modifying the top level key, as it needs to be a valid Taproot key after tweaking.

TLUV as designed requires some form of OP_AMOUNT to enable the recursive shared UTXO shown above.

There is no current concrete proposal (e.g. BIP) for TLUV, it’s open ended research presently.

  1. Optech
  2. Mailing List
  3. My Mailing List Response

CSFS CheckSigFromStack

CheckSigFromStack, or CheckDataSig (note for experts: usually shorthand for the verification-only version, as there's little point in checking that something wasn't signed by someone), is an opcode which checks that an arbitrary message was signed by a key. Normally, when a Bitcoin script checks a signature, the message must be a hash of the current transaction, computed in accordance with the requested transaction hashing program.

CSFS has a couple “basic” applications that could be useful. For example, one might write a program where either a key K signs a transaction normally, or it signs a key which then signs a transaction. This allows the holder of a coin to “delegate” the ownership of a coin to another key without moving the coin.

CSFS already exists in Bitcoin in some sense: using Lamport Signatures it is currently possible to check a signature over 5 bytes of data. This is not terribly useful, but one could imagine certain uses, e.g., delegating the duration of a timelock to a specified signer.

CSFS really shines when it is combined with other opcodes. For example, CSFS plus CTV can enable something similar to AnyPrevout and Eltoo. CSFS plus CAT enables fully generic covenants in Segwit v0, but not in Taproot (without some sort of OP_TWEAK as well). This is best left to additional reading on the subject, but imagine if I first check the transaction signature normally, and then check it on the stack against the transaction itself pushed onto the stack, which I used CAT to assemble from pieces. This would let me run programmatic checks on all the components of the transaction.

While there is not currently a proposal for CSFS, it’s not terribly controversial and the design would be relatively straightforward.

  1. BIP Suggestions
  2. Templates, Eltoo, and Covenants, Oh My!
  3. CSFS from Math (5 bytes)

OP_AMOUNT

OP_AMOUNT was proposed in 2017 by Johnson Lau (the earliest citation I could dig up) through a scripting extension called PUSHTXDATA that allows arbitrary data to be pushed on the stack. As a standalone extension, getting the amount spent/created onto the stack (whether as a push opcode or an opcode with verify semantics) would allow smart contracts to either limit the amount being spent or switch behavior based on the amount.

For example, with TLUV a Taproot branch can have an individual balance that can be updated at the discretion of the branch holder. Suppose I had a script tree that said Alice has 1 bitcoin and Bob has 20 Bitcoin. When Alice is spending, the script would require that the corresponding output (e.g., input 0 output 0) be reduced by at most 1 Bitcoin, and the output should be updated to change Alice’s script to have 1-(spent amount) in the next instance.
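The balance rule such a check would enforce is simple arithmetic. A sketch in plain Rust (`next_balance` is a hypothetical helper, not a proposed opcode semantics):

```rust
// Alice's branch carries `balance` sats; a spend of `spent` sats is valid
// only if spent <= balance, and the next instance carries balance - spent.
// checked_sub returns None on underflow, modeling rejection of an overspend.
fn next_balance(balance: u64, spent: u64) -> Option<u64> {
    balance.checked_sub(spent)
}

fn main() {
    let alice = 1_0000_0000u64; // Alice's 1 BTC, in sats
    // Spending 0.25 BTC leaves 0.75 BTC in her branch.
    assert_eq!(next_balance(alice, 2500_0000), Some(7500_0000));
    // Trying to spend 2 BTC from a 1 BTC branch is rejected.
    assert_eq!(next_balance(alice, 2_0000_0000), None);
}
```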

As another example, CTV could be used with OP_AMOUNT to enable an ultra-high-security vault if the amount sent is greater than 1 Bitcoin and a lower-security vault if it is less than 1 Bitcoin.

There’s no current concrete proposal for OP_AMOUNT. Difficulties in adding it remain because Bitcoin Script deals in 32-bit math while amounts are 64-bit values (51 bits, precisely).
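Where that 51-bit figure comes from can be checked directly: the maximum possible supply in satoshis fits under 2^51 but not under 2^50.

```rust
fn main() {
    // Total supply: 21 million BTC at 100,000,000 sats each.
    let max_sats: u64 = 21_000_000 * 100_000_000; // 2.1e15 sats
    // 51 bits suffice: 2^50 < max_sats < 2^51.
    assert!(max_sats < (1u64 << 51));
    assert!(max_sats > (1u64 << 50));
}
```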

  1. OP_PUSHTXDATA
  2. OP_IN_OUT_AMOUNT

SIGHASH_BUNDLE

SIGHASH_BUNDLE is part of an effort to make “Sighash Flags” more general. Sighash Flags are a mini “programming language” for describing what parts of a transaction a signer wants to sign. Bundles in particular allow a signer to select a range of inputs and outputs such that the bundle description can be rebound, allowing some form of post-hoc aggregation of transactions.

It’s primarily proposed to help make Decker Channels work with a sub-protocol called “layered commitments”. It’s possible for inclusion, but it has the same issue as AnyPrevout: we need to see an end-to-end implementation of LN using it to be sure the technology solves the problem it is designed to solve.

There’s no concrete implementation proposed yet.

  1. Mailing List Post

Transaction Sponsors

Transaction Sponsors is another proposal by yours truly.

The basic concept of Transaction Sponsors is to allow expressing logic that Transaction B should only be in a block if Transaction A is also in the block. In particular, the proposal says that a transaction with a 0 value output with script OP_VER <txids> would make the transaction valid only if the txids were also in the block.

The ability to express such a dependency has implications for designing novel smart contracts, but that is not the focus of the Sponsors proposal, which is about mempool policy.

Instead, the Sponsors proposal is to use the ability to express additional dependencies as a way of dynamically adding fees to transactions in the mempool without relying on CPFP or RBF. This primitive is particularly helpful for driving progress of smart contracts based on CTV or Decker Channels without requiring any sort of transaction malleability.
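The validity rule itself reduces to a set-membership check. A sketch in plain Rust (txids abbreviated to strings; `sponsor_valid` is an illustrative helper, not the proposal's actual code):

```rust
use std::collections::HashSet;

// A sponsoring transaction is valid in a block only if every txid it
// names is also present in that block.
fn sponsor_valid(block_txids: &HashSet<&str>, sponsored: &[&str]) -> bool {
    sponsored.iter().all(|t| block_txids.contains(t))
}

fn main() {
    let block: HashSet<&str> = ["txA", "txB", "sponsor"].into_iter().collect();
    // Sponsoring txA is fine: it's in the block.
    assert!(sponsor_valid(&block, &["txA"]));
    // Sponsoring txC is not: the sponsor would be invalid in this block.
    assert!(!sponsor_valid(&block, &["txC"]));
}
```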

There is currently an implementation and Draft BIP of Sponsors, but the BIP has not been advanced for inclusion yet.

  1. Mailing List Post
  2. Post about difficulties of paying fees

OP_CAT (Or SHASTREAM)

OP_CAT is “deceptively simple”. All it enables is the ability to take an argument “hello “ and an argument “world” and join them together into “hello world”.
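As a stack operation, that's just pop-pop-concatenate-push. A sketch in plain Rust (ignoring real Script details like the element size limit):

```rust
// OP_CAT pops the top two stack items and pushes their concatenation.
fn op_cat(stack: &mut Vec<Vec<u8>>) -> Result<(), &'static str> {
    let b = stack.pop().ok_or("stack underflow")?;
    let a = stack.pop().ok_or("stack underflow")?;
    let mut joined = a;
    joined.extend_from_slice(&b);
    stack.push(joined);
    Ok(())
}

fn main() {
    let mut stack = vec![b"hello ".to_vec(), b"world".to_vec()];
    op_cat(&mut stack).unwrap();
    assert_eq!(stack, vec![b"hello world".to_vec()]);
}
```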

CAT was originally a part of Bitcoin, but it had some implementation flaws and was removed by Satoshi in an emergency patch early on in Bitcoin’s history.

Although it is simple, it turns out that the ability to join bytestrings together adds a remarkable variety of functionality to Bitcoin, including things like quantum proof signatures and covenants. There are a couple different variants of CAT that would be possible and have different tradeoffs, but largely CAT and friends are not controversial in their design. What does make CAT controversial is that because it has the propensity to introduce so many surprising behaviors in Bitcoin, we might prefer to better understand the impacts of users being able to author such advanced smart contracts.

  1. Quantum Proof Bitcoin
  2. Poelstra CAT Blog I
  3. Poelstra CAT Blog II

OP_TWEAK / ECMUL

These two opcodes enable manipulating an elliptic curve point on the stack for use in a covenant or to compute a particular private key.

There’s no concrete proposal for this pair, but the implementations are basically specified already by the requirements of the secp256k1 curve.

Adaptor Signatures

Adaptor Signatures are a technique that can be used with Schnorr signatures and do not require any additional forks to Bitcoin.

The basic idea of an adaptor signature is that a party (or group of parties) can create an object which either takes in a signature and reveals a secret, or takes a secret and reveals a signature.

These adaptors can be used in place of hash preimage locks for a variety of use cases.

  1. Optech

Delegation / Graftroot

Delegation is a general concept whereby you can take a script and instead of signing a transaction, you sign another script that can then execute. For example, imagine if there is a coin that requires a signature of Alice and Bob to spend. Suppose Alice wants to go offline, but Bob might want to transact. Alice could sign a script requiring a signature from Carol that “substitutes” for Alice’s signature in the future.

Delegation is currently possible in a somewhat roundabout way through coin-delegation. This is where the other script fragment must be represented by a UTXO.

Graftroot is an extension to Taproot which would let the top-level key-path signers sign delegating scripts, but not other tapscript branches. There are also several confusingly named extensions and alternatives in the links below.

Delegation could also be combined with Anyprevout so that delegation authorizations are bound to a specific coin or to a specific script. CSFS enables a basic kind of delegation as well. This would enable, with Graftroot, a version of Taproot where the trees are constructed interactively and do not have any lookup cost.

Other than what’s presently possible, there are no concrete proposals for adding new delegation features to Bitcoin.

  1. Coin Delegation
  2. Graftroot
  3. Entroot
  4. G’Root (not graftroot)

BIP-300 DriveChains

Drivechains are a highly application-specific type of recursive covenant designed to help sidechains operate, tracking sidechain deposits and withdrawals with an on-chain, miner-driven voting system.

The sidechains would have the ability to run arbitrary smart contracts (at the choice of the sidechain operators). Miners then upvote, downvote, or abstain from voting on withdrawals through a special output type.

One of the main downsides to this approach is that the BIP-300 proposal as written requires the addition of new global state databases, rather than local state contained within the covenant transaction itself.

Overall, Drivechains are relatively controversial: there is lots of interest from the community but also some outspoken critics, because of the changes to Bitcoin's incentive stability for consensus. It's included here for completeness and by request of what topics to cover in today's post.

It’s the author’s opinion that while the concept of Drivechains is useful, the implementation does not need to live as transactions inside the existing block space and could instead be tracked via a separate commitment (like Segwit). This could happen if Drivechains were implemented via a more generalized covenant rather than an application-specific one.

  1. BIP-300
  2. Drivechains

Elements Opcodes

Elements is Blockstream’s Bitcoin fork for their Liquid Sidechain. Elements has planned to add a broad variety of opcodes that can help to accomplish a variety of tasks, including many of the above, in addition to their existing extensions.

  1. Existing Opcodes
  2. Upgrade for Taproot

Breathe! That was a lot! There’s still other stuff that’s floating around, but these are the top-of-mind primitives in my head for bringing more programmability to Bitcoin.

Future posts will zero in on what’s possible with BIP-119 and Sapio and help make the case that it is a fantastic next step in Bitcoin’s Upgrade journey by showing (not telling) how one little limited opcode opens up an entire world of possibilities, as well as laying out a – dare I say – personal roadmap for the inclusion and development of other upgrades as a coherent narrative for Bitcoin.


© 2011-2021 Jeremy Rubin. All rights reserved.