

Payment Channels in a CTV+Sapio World

Day 14: Rubin's Bitcoin Advent Calendar

Welcome to day 14 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

Lightning Lightning Lightning

Everybody loves Lightning. I love Lightning, you love Lightning. We love everyone who works on Lightning. Heck, even Chainalysis loves Lightning these days :(…

We all love Lightning.

But what if I told you we could love Lightning even more? Crazy, right?

With CTV + Sapio we can improve on Lightning in some pretty cool ways you may not have heard too much about before. Buckle up, we’re in for another doozy of a post.

Let a thousand channels bloom

The main thing we’re going to talk about in this post is the opening and closing of channels. There are some other things that CTV/Sapio can do that are a bit more niche to talk about1, but there will always be future posts.

How do we open channels today?

Let’s say I want to open a channel up with you. I shoot you a text on Signal or something and say “hey what’s up, happy holidays friend. I would like to open a payment channel with you”. You say back, “’Tis the season! Let’s do it, my Tor Hidden Service address is ABCXYZ”. Then I connect to your node from my computer and say I want to open a channel with you for 500,000 sats (at writing in 2021 this was $250 US Dollars, not $250 Million Dollars). Then, you might authorize opening up the channel with me, or your node might just roll the dice and do it without your permission (IDK how the nodes actually work, depends on your client, and maybe in the future some reputation thingy).

So now we have agreed to create a channel.

Now, I ask you for a key to use in the channel and you send it to me. Then, I create an unsigned transaction F that is going to create and fund our channel. The channel is in Output C. I send you F and C. Then, I ask you to pre-sign a transaction spending from C, which doesn’t yet exist, that would refund me and give you nothing in the event you go offline. This is basically just using the channel as if it already exists, with a payment of 0 to me. After I get those sweet sweet signatures from you, I send you signatures as well in case you want to close things out like normal.

Houston, we have a channel.

Now we can revoke old states and stuff and sign new states and all that fancy channel HTLC routing jazz. We don’t really need to know how a lot of that works down in the details so don’t ask.

Something a little more nifty, perhaps?

Technically I presented how single-funded channels work, but you can also dual-fund, where we both contribute some funds. It’s a relatively new feature to land and was a lot of work… Dual-funded channels are important because when I opened the channel to you I had all the sats, so I couldn’t receive any Bitcoin. Dual-funded channels mean you can immediately send in both directions.

What can we do with CTV?

With CTV, the single-funded channel opening story is a bit simpler. I ask you if you want to open a channel, you say “sure!” (maybe I even look up your key from a Web-of-Trust system) and send me a key. I then use Sapio to compile a channel for 500k sats to our keys and send Bitcoin to it. The channel is created. I send you the Outpoint + the arguments to the channel, either through email, connecting to your node, or pigeon with a thumbdrive, and later you verify that I paid into the channel Sapio output for our keys by running the compiler with the same arguments (500k sats to our keys).

This is called a non-interactive channel open. Why’s that? Beyond having to do some basics (e.g., I have to know a key for you, which could be on a public Web-of-Trust), there is no step in the flow that requires any back-and-forth negotiation to create the channel. I just create it unilaterally, and then I could tell you about it a year later. You’d be able to verify it fine!
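To make that concrete, here’s a minimal sketch of the data that actually needs to travel from me to you. The struct below is hypothetical (not part of Sapio); it just bundles the outpoint and compiler arguments mentioned above:

/// Hypothetical message type bundling everything the receiver needs to verify
/// a non-interactive channel open, whenever they happen to receive it.
struct ChannelOpenProof {
    /// where the channel UTXO was created on-chain
    funding: bitcoin::OutPoint,
    /// the arguments I fed to the Sapio channel contract
    keys: [bitcoin::PublicKey; 2],
    amount: bitcoin::Amount,
}

// Verification: run the same Sapio compilation on (keys, amount) and check
// that the UTXO at `funding` pays the compiled address for `amount`.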

For dual-funded channels, I send you a transaction that you can pay into to finish opening the channel, and then I can go offline. Once opened, the channel lets us both recover our funds.

sounds niche

It kinda is. It’s an esoteric nerdy property. But I promise you it’s really cool! Let’s look at some examples:

Cafe Latte Anyone?

Let’s say that I go to a cafe I’ve never been to and there is a QR code posted on the wall. I then go about my business, ordering a 10,000 sat breakfast combo. To pay, I scan the QR code, which has an XPUB for Non-Interactive Channels on it.

I can then plug in that XPUB into my Sapio Channel Creator and create a channel with a first payment of 10k sats and a total balance of 100k sats. I show a QR code on my phone to the barista, who scans it, getting the details of the channel I made. Barista says looks good, acknowledging both the payment and the channel open. The details get backed up to The Cloud.

But just then something happens: a masked figure comes in with a gun and tells the barista, “GIVE ME ALL YOUR SATOSHIS”. A child begins to cry, their parent covering their mouth with their hand. The bad guy barks, “GIVE ME ALL YOUR SATOSHIS… and no one gets hurt,” tapping the muzzle of the gun on the countertop. The barista smirks and snarls, “Stupid thief, surely you’ve been reading the post on non-interactive lightning channels on Rubin’s Bitcoin Advent Calendar.” The robber adjusts the straps on their mask for some relief from the ear irritation. “If you had been reading it, you would know that I don’t need to have a key online in order for someone to create a channel with me! I just need the XPUB to verify they are made correctly. This is not those old-school channels. I have no ability to spend. We keep our keys colder than our cold brew.” The robber’s shoulders sag and they mutter, “fine, in that case, I’ll have a medium cold brew coffee, one sugar with a splash of oat milk. And that big chocolate chip cookie”.

That’s right. Because our cafe used non-interactive channels, they didn’t have to have a key online to create a channel with me! They just needed durable storage for the channel definition.

And when I go to spend a bit extra for a bottle of Topo Chico™ later, they still don’t need to be online, I can start making payments without them counter-signing2.

Where did my corn come from?

How did I get the bitcoin for the channel I’m opening? Usually this is just an assumption for Lightning (you have Bitcoin!), but in this case it’s central to the plot. You probably got them from an exchange, mining, or something else.

This means that in order to open a channel to someone, I need to do two transactions:

  1. Get some money
  2. Make the channel

It’s possible, if I had a really legit hip exchange, that they’d let me directly open a channel by offering me an unsigned transaction with the channel output C that I can pre-sign with you! But then they can’t really batch payments (otherwise one user going offline can be a DoS attack on the batch payout), and even unbatched they can get DoS’d, since we can “lock up” a coin while we run the protocol.

If instead, we had CTV we could just generate an address for the channel we wanted and request the exchange pay to it the appropriate amount of coin. The exchange could pay the channel address however they want, and we’d be able to use it right away.

However they want?

Yes. Let’s look at some options:

  1. A normal transaction – Works great.
  2. A batch transaction – No Problemo.
  3. A Congestion Control Tree – Even that!

What was that last one? You read it right, a channel can be created in a Congestion Control tree, and be immediately usable!

How’s this work? Well, because you can fully verify you’d receive a payment in a congestion control tree, you can likewise fully verify that your channel will be created.

This is big. This means that you can just directly request a channel from a third party without even telling them that you’re making a channel!

And this technique – channels in a congestion control tree – generalizes beautifully. It means you could create as many immediately usable channels as you like and lazily fully open them over their lifetime, whenever blockspace is affordable.

I Lied (a little)

If the exchange doesn’t follow your payment instructions to a T (e.g., if they split the payment into two UTXOs), then it won’t work. Exchanges should probably not do anything other than what you asked them to do (this should be something to ensure in the exchange’s terms of service…).

Come on in the water’s warm?

This concept also composes nicely with the Payment Pools we saw yesterday. Imagine you embed channels as the terminal outputs after a full-ejection from the pool. Then, what you can do is have the N-of-N agree to an on-chain state update that respects (or preserves) any channel updates before you switch. Embedding the channels inside means that Payment Pools would only need to do on-chain transactions when they need to make an external payment or re-configure liquidity among participants.

For example, imagine a pool with Alice, Bob, Carol, and Dave each having one coin in a channel. We’ll do some channel updates, and then reconfigure.

Start:
Pool(Channel([A, 1], [B, 1]), Channel([C, 1], [D, 1]))

Channel Update (off-chain):
Pool(Channel([A, 0.4], [B, 1.6]), Channel([C, 1], [D, 1]))

Channel Update (off-chain):
Pool(Channel([A, 0.4], [B, 1.6]), Channel([C, 1.3], [D, 0.7]))

Pool Reconfigure (on-chain; swap channel partners):
Pool(Channel([A, 0.4], [D, 0.7]), Channel([C, 1.3], [B, 1.6]))

Pool Reconfigure (on-chain; add Eve/Bob Channel):
Pool(Channel([A, 0.4], [D, 0.7]), Channel([C, 1.3], [B, 0.6]), Channel([E, 0.5], [B, 0.5]))

Pretty neat, right?

This is a particularly big win for Scalability and Privacy, since we’re now containing tons of activity within a single UTXO, and even within that UTXO most of the information doesn’t need to be known to all participants.


I’m not going to show you all of these integrations directly (Congestion Control, Pools, etc), because you gotta cut an article somewhere. But we do have enough…

Time to Code

OK enough ‘how it works’ and ‘what it can do’. Let’s get cracking on a basic channel implementation so you know I’m not bullshitting you3.

First, let’s define the basic information we’ll need:

/// Information for each Participant
struct Participant {
    /// signing key
    key: PublicKey,
    /// amount of funds
    amount: AmountF64,
}

/// A Channel can be either in an Open or Closing state.
enum State {
    Open,
    Closing
}

/// Channel definition.
struct Channel {
    /// If it is opening or closing
    state: State,
    /// Each participant's balances
    parties: [Participant; 2],
    /// Amount of time transactions must be broadcast within
    timeout: AnyRelTimeLock,
}

Pretty straightforward.

Now, let’s define the API:

impl Contract for Channel {
    declare!{then, Self::finish_close, Self::begin_close}
    declare!{updatable<Update>, Self::update} 
}

Next, we’ll define the begin_close logic. Essentially all it’s going to do is, if we’re in the Open state, allow transitioning the channel to the Closing state.

impl Channel {
    #[compile_if]
    fn if_open(self, ctx: Context) {
        if let State::Open = self.state {
            ConditionalCompileType::Required
        } else {
            ConditionalCompileType::Never
        }
    }

    #[then(compile_if = "[Self::if_open]")]
    fn begin_close(self, ctx: Context) {
        // copy the channel data and change to closing state
        // begin_close can happen at any time
        let mut close = self.clone();
        close.state = State::Closing;
        ctx.template()
            .add_output(Amount::from(self.parties[0].amount) +
                        Amount::from(self.parties[1].amount),
                        &close, None)?
            .into()
    }
}

Next we’ll define the logic for the Closing state. Essentially, if the state has been in Closing and the timeout expires, then we allow a transaction to return the funds to the parties at their recorded balances. We’ll only add an output for a participant if they have any money!

impl Channel {
    #[compile_if]
    fn if_closing(self, ctx: Context) {
        if let State::Closing = self.state {
            ConditionalCompileType::Required
        } else {
            ConditionalCompileType::Never
        }
    }

    #[then(compile_if = "[Self::if_closing]")]
    fn finish_close(self, ctx: Context) {
        // only allow finish_close after waiting for timelock
        let mut tmpl = ctx.template().set_sequence(-1, self.timeout)?;
        // add party 0 if they have funds
        if Amount::from(self.parties[0].amount).as_sat() != 0 {
            tmpl = tmpl.add_output(self.parties[0].amount.into(), &self.parties[0].key, None)?;
        }
        // add party 1 if they have funds
        if Amount::from(self.parties[1].amount).as_sat() != 0 {
            tmpl = tmpl.add_output(self.parties[1].amount.into(), &self.parties[1].key, None)?;
        }
        tmpl.into()
    }
}

Almost lastly, we’ll add the updating logic. The updating logic has to be used in a very particular way in this contract, but it’s pretty basic by itself!

// updating a channel
enum Update {
    // nothing to do!
    None,
    // An update that can later be 'burned' (revoked)
    Revokable(Revokable),
    // An update that is formed to terminate a channel
    Cooperate([Participant; 2])
}

impl Channel {
    #[guard]
    fn both_signed(self, ctx: Context) {
        Clause::And(vec![Clause::Key(self.parties[0].key),
                         Clause::Key(self.parties[1].key)])
    }

    #[continuation(guarded_by = "[Self::both_signed]")]
    fn update(self, ctx: Context, u: Update) {
        match u {
            // don't do anything
            Update::None => empty(),
            // send funds to the revokable contract
            Update::Revokable(r) => {
                // note -- technically we only need to sign revokables where
                // state == State::Closing, but we do both for efficiency
                ctx.template()
                    .add_output(Amount::from(self.parties[0].amount) + 
                                Amount::from(self.parties[1].amount), &r, None)?
                    .into()
            },
            // Terminate the channel into two payouts.
            Update::Cooperate(c) => {
                ctx.template()
                   .add_output(c[0].amount.into(), &c[0].key, None)?
                   .add_output(c[1].amount.into(), &c[1].key, None)?
                   .into()

            }
        }
    }
}

Now to finish we need to define some sort of thing for Revokable. Revokables are used to update a channel from one set of balances to another. This will depend on your payment channel implementation. I’ve defined a basic one below, but this could be anything you like.

Essentially, a Revokable is an offer from party A to party B to close the channel such that B can later provably “reject” the offer. If B uses a rejected offer, A can take the entire balance of the channel.

How to use this to update a channel? To start, all parties agree on the new balances with a timeout.

Next, party one gets a hash H(V) from party two, such that party two knows V and party one does not. Party one then creates a Revokable with from_idx = 0, the updated balances, timelock, and hash H(V). They feed the update arguments to Channel::update and sign the resulting transaction, sending the signed transaction to party two. In particular, for non-interactive channels party one only has to sign revokable updates at the branch where state == State::Closing, but in case your counterparty isn’t malicious and is just offline, it’s better to sign updates on both Open and Closing. Just signing on Open would be insecure.

Then, we repeat this with roles reversed with one generating a hash and two signing transactions.

Lastly, both parties reveal the hash preimages (the V for each H(V)) from any prior rounds to revoke those old states with their counterparty.

If either party ever broadcasts a Revokable they received (by co-signing the other half of the Channel::update) after having revealed their hash preimage for it, the other party can take all the funds in the channel.

Kinda tough to understand, but you don’t really need to get it; you can embed whatever protocol like this you want inside.

struct Revokable {
    // updated balances
    parties: [Participant; 2],
    // hash whose preimage is known only by the other party
    hash: Hash,
    // how long the other party has to revoke
    timelock: AnyRelTimeLock,
    // who is this update from
    from_idx: u8,
}

impl Contract for Revokable {
    declare!{then, Self::finish}
    declare!{finish, Self::revoked}
}

impl Revokable {
    /// after waiting for the timeout, close the balances out at the appropriate values.
    #[then]
    fn finish(self, ctx: Context) {
        let mut tmpl = ctx.template().set_sequence(-1, self.timelock)?;
        if Amount::from(self.parties[0].amount).as_sat() != 0 {
            tmpl = tmpl.add_output(self.parties[0].amount.into(), &self.parties[0].key, None)?;
        }
        if Amount::from(self.parties[1].amount).as_sat() != 0 {
            tmpl = tmpl.add_output(self.parties[1].amount.into(), &self.parties[1].key, None)?;
        }
        tmpl.into()
    }

    /// if this was revoked by the other party
    /// we can sweep all the funds
    #[guard]
    fn revoked(self, ctx: Context) {
        Clause::And(vec![
            Clause::Sha256(self.hash),
            Clause::Key(self.parties[self.from_idx as usize].key)])
    }
}
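To tie the pieces together, here’s a rough sketch of party one assembling a Revokable offer to feed into Channel::update. The variable names (alice_key, new_balance_a, h_v, one_day) are hypothetical stand-ins for values the parties agreed on out-of-band:

// Illustrative only: party one proposes updated balances to party two.
let proposed = Revokable {
    // the new balances both parties agreed to
    parties: [
        Participant { key: alice_key, amount: new_balance_a },
        Participant { key: bob_key, amount: new_balance_b },
    ],
    // H(V), supplied by party two; only party two knows V
    hash: h_v,
    // how long the counterparty has to prove revocation before finish() pays out
    timelock: one_day,
    // this offer is from party one (parties[0])
    from_idx: 0,
};
// Party one then signs the template(s) produced by
// Channel::update(ctx, Update::Revokable(proposed)) -- at minimum on the
// Closing branch -- and ships the signatures to party two.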

And now some closing remarks:

CTV Required?

You don’t need CTV for these channel specs to work, but you do need CTV for the channels to be non-interactive. Without CTV you just use a multi-sig oracle of both parties, and the contracts come out logically similar to an existing lightning channel. Does that mean we’re going to enter…

The Era of Sapio Lightning?

It’s probably going to be a while (or never) before this actually becomes a standard “Lightning” thing, even though you could use it with self-hosted oracles today.

However, it’s possible! One path towards that would be if, perhaps, Sapio gets used to help define the “spec” that all lightning protocols should implement. Then it’d be theoretically possible to use Sapio for a channel implementation! Or maybe Sapio becomes a “plugin engine” for negotiating channels and updates can just be shipping some WASM.

What didn’t make the cut?

Some ideas to mention, but not fully flesh out (yet?):

Eltoo

So, so very much. To start, CTV+CSFS can do something like Eltoo, with no need for AnyPrevout. Very neat! If we had some Eltoo primitive available, I could show you revocation-free channels.

Embedded Sapio States

Instead of making the channel state a boring “pay X to 0, pay Y to 1” resolution, we can actually embed all sorts of contracts inside of channels.

E.g., imagine a channel where, if you do a contested close, your counterparty’s funds (they’re conceivably offline) go to a cold-storage vault.

Or imagine you had some sort of oracle-resolved, bitcoin-settled synthetic derivative contract, like a DLC, embedded inside. You could then use this to HFT your synths!

Or what if there were some new-fangled token protocol that lived from state transition to state transition, and you could update your and your counterparty’s stakes in those tokens?

You can really put anything you want. We’ll see in a couple days how you can define a Channel Plugin Interface so that you can dynamically link a logic module into a contract, rather than compiling it in.

Embedded Channels

We saw a little bit of embedded channels. Channels embedded in congestion control, or in payment pools. But the concept can be a lot more diverse. Remember our Vaults and inheritance schemes? We could make the hot-wallet payouts from those go directly into Channels with some channel operator hub. Or what about making channels directly out of coinjoins? Not having to pre-sign everything really helps. Don’t sleep on this.

Embedded Channel Creation Args

We said earlier that channel creation required communicating the arguments somehow (email, your node, pigeon…). But it’s also sometimes possible to embed the channel metadata into, e.g., an OP_RETURN on the channel creation transaction. Perhaps as an IPFS hash or something. In this case, you would just need to scan over txs, download the relevant data, and then attempt plugging it into WASM (heck – the WASM could just receive the txn in question and do all the heavy lifting). If the WASM spits out a matching output/channel address, you now have a channel you can detect automatically. This doesn’t have to be bad for privacy if the data is encrypted somehow!

How will this impact the world?

Non-interactive channel creation is going to, for many users, dramatically decrease the cost of channel opening. Firstly, you can defer paying fees when you open many channels (big news)! In fact, if the channel is long lived enough, you may never pay fees if someone else does first! That incentive to wait is called backpressure. It’s also going to “cut through” a lot of cases (e.g., exchange withdrawals, moves from cold storage, etc.) that would otherwise require 2 transactions. And channels in Payment Pools have big opportunities to leverage cooperative actions/updates to dramatically reduce chain load in the happy case.

This is a gigantic boon not just for scalability, but also for privacy. The less that happens on chain the better!

I think it’s also likely that with non-interactive channels, one might always (as was the case with our cafe) opportunistically open channels instead of making normal payments. Removing the “counterparty online” constraint is huge. Being able to just open a channel and bet that you’ll be able to route is a big win. This is similar to “PayJoin”, whereby you try to coin-join all payments for both privacy and fee savings.

Tomorrow, we’ll see sort of a magnum opus of using non-interactive channels, so stay tuned folks, that’s all for today.

  1. CTV + CSFS can do something like Eltoo/Decker channels with a script like CTV <pk> CSFSV

  2. There are some caveats to this, but it should generally work when you’re making payments in one direction. 

  3. Writing 27 posts is really hard and a big crunch, so I’m permitting myself a little micro-bullshit in that I’m not actually compiling this code so it probably has some bugs and stuff, but it should “read true” for the most part. I may clean this post up in the future and make sure everything works perfectly as described. 



Payment Pools / Coin Pools

Day 13: Rubin's Bitcoin Advent Calendar

Welcome to day 13 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

Payment Pools are a general concept for a technique to share a single UTXO among a group. They’ve been discussed for a couple years1, but now that Taproot is active they’re definitely more relevant! In this post we’ll go through some really simple Payment Pool designs before turning it up a little bit :)

Mechanistically, all that is required of a Payment Pool is that:

  1. It’s a single (shared) UTXO2
  2. Every user can get their funds out unilaterally3
  3. A set4 of users can authorize spending the funds
  4. Unspent funds/change stays in the pool

Why Pool?

Pools are really great for a number of reasons. In particular, Payment Pools are fantastic for Scalability since they mean 1 utxo can serve many masters, and also each txn only requires one signature to make a batched payment from a group. Payment Pools are kinda a killer version of a coin-join where you roll the funds from coinjoin to coinjoin automatically5, giving you great privacy. We’ll also see how they benefit decentralization in a couple of days.

What’s the simplest design that can satisfy this?

Imagine a coin that is either N-of-N multisig OR a transaction distributing the coins to all users. The Sapio code would look a bit like this:

struct SimplePool {
    /// list of all initial balances
    members: HashMap<PublicKey, Amount>
}

impl SimplePool {
    /// Send their balances to everyone
    #[then]
    fn ejection(self, ctx: Context) {
        let mut t = ctx.template();
        for (key, amount) in self.members.iter() {
            t = t.add_output(*amount, key, None)?;
        }
        t.into()
    }

    /// all signed the transaction!
    #[guard]
    fn all_signed(self, ctx: Context) {
        Clause::Threshold(self.members.len(),
                          self.members
                              .keys()
                              .cloned()
                              .map(Clause::Key)
                              .collect())
    }
}

impl Contract for SimplePool {
    declare!{then, Self::ejection}
    declare!{finish, Self::all_signed}
}

Let’s check our list:

  1. It’s a single UTXO – Check
  2. Every user can get their funds out unilaterally – Check, with SimplePool::ejection
  3. A set of users can authorize spending the funds – Check, unanimously
  4. Unspent funds/change stay in the pool – We’ll give this a Check, just don’t sign transactions that don’t meet this constraint.

So we’re good! This is all we need.
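As a quick usage sketch (the keys are hypothetical, and the compile/funding step is only described in comments since this post doesn’t show the exact compiler invocation):

// Hypothetical setup: three members each contribute 1 BTC to the pool.
let pool = SimplePool {
    members: HashMap::from([
        (alice_key, Amount::from_btc(1.0).unwrap()),
        (bob_key, Amount::from_btc(1.0).unwrap()),
        (carol_key, Amount::from_btc(1.0).unwrap()),
    ]),
};
// Compiling `pool` with Sapio yields a single address; paying 3 BTC to it
// creates the shared UTXO. From there, any member can broadcast the
// pre-committed `ejection` transaction, or all three can co-sign a spend
// via `all_signed`.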

But is it really all we need?

It’d be nice if the Payment Pool had a little bit more structure around the updating so that a little bit less was left to the user to do correctly. Luckily, Sapio has tools for that. Let’s define a transition function in Sapio that generates the transaction we should sign with SimplePool::all_signed.

The transition function should take a list of signed updates per participant and generate a transaction for signing (having each participant sign their requested payouts helps coordinate so no one signs an incorrect transaction). Any leftover funds should be sent into a new instance of the Payment Pool for future use.

We’ll also make one more change for efficient ejections: In the version I gave above, the unilateral ejection option exits everyone out of the pool, which kinda sucks.

Instead, we will ‘hybridize’ the payment pool with the tree payment. Then, you would have “hierarchical” pools whereby splitting keeps smaller pools alive. E.g., if you had 30 people in a pool with a splitting radix of 2, 1 person force-ejecting themselves would create something like 1 pool of size 15, 1 pool of size 7, 1 pool of size 4, 1 pool of size 2, and 2 ejected people. They can always re-join a pool again after!

First, we’ll define the basic Pool data and interface:

#[derive(Deserialize, JsonSchema, Clone)]
struct NextTxPool {
    /// map of all initial balances as PK to BTC
    members: BTreeMap<PublicKey, AmountF64>,
    /// The current sequence number (for authenticating state updates)
    sequence: u64,
    /// Whether to require signatures (debugging; should be true)
    sig_needed: bool,
}

impl Contract for NextTxPool {
    declare! {then, Self::ejection}
    declare! {updatable<DoTx>, Self::do_tx}
}

Now we’ll define the logic for ejecting from the pool:

impl NextTxPool {
    /// Sum Up all the balances
    fn total(&self) -> Amount {
        self.members
            .values()
            .cloned()
            .map(Amount::from)
            .fold(Amount::from_sat(0), |a, b| a + b)
    }
    /// Only compile an ejection if the pool has other users in it, otherwise
    /// it's the base case.
    #[compile_if]
    fn has_eject(self, ctx: Context) {
        if self.members.len() > 1 {
            ConditionalCompileType::Required
        } else {
            ConditionalCompileType::Never
        }
    }
    /// Split the pool in two -- users can eject multiple times to fully eject.
    #[then(compile_if = "[Self::has_eject]")]
    fn ejection(self, ctx: Context) {
        let mut t = ctx.template();
        let mid = (self.members.len() + 1) / 2;
        // find the middle
        let key = self.members.keys().nth(mid).expect("must be present").clone();
        let mut pool_one: NextTxPool = self.clone();
        pool_one.sequence += 1;
        let pool_two = NextTxPool {
            // removes the back half including key
            members: pool_one.members.split_off(&key),
            sequence: self.sequence + 1,
            sig_needed: self.sig_needed,
        };
        let amt_one = pool_one.total();
        let amt_two = pool_two.total();
        t.add_output(amt_one, &pool_one, None)?
            .add_output(amt_two, &pool_two, None)?
            .into()
    }
}

Next, we’ll define some data types for instructing the pool to update:

/// Payment Request
#[derive(Deserialize, JsonSchema)]
struct PaymentRequest {
    /// # Signature
    /// hex encoded signature of the fee, sequence number, and payments
    hex_der_sig: String,
    fee: AmountF64,
    payments: BTreeMap<Address, AmountF64>,
}
/// New Update message for generating a transaction from.
#[derive(Deserialize, JsonSchema)]
struct DoTx {
    /// # Payments
    /// A mapping of public key in members to signed list of payouts with a fee rate.
    payments: HashMap<PublicKey, PaymentRequest>,
}
/// required...
impl Default for DoTx {
    fn default() -> Self {
        DoTx {
            payments: HashMap::new(),
        }
    }
}
impl StatefulArgumentsTrait for DoTx {}

/// helper for rust type system issue
fn default_coerce(
    k: <NextTxPool as Contract>::StatefulArguments,
) -> Result<DoTx, CompilationError> {
    Ok(k)
}

Lastly, we’ll define the logic for actually doing the update:

impl NextTxPool {
    /// all signed the transaction!
    #[guard]
    fn all_signed(self, ctx: Context) {
        Clause::Threshold(
            self.members.len(),
            self.members.keys().cloned().map(Clause::Key).collect(),
        )
    }
    /// This Function will create a proposed transaction that is safe to sign
    /// given a list of data from participants.
    #[continuation(
        guarded_by = "[Self::all_signed]",
        coerce_args = "default_coerce",
        web_api
    )]
    fn do_tx(self, ctx: Context, update: DoTx) {
        // don't allow empty updates.
        if update.payments.is_empty() {
            return empty();
        }
        // collect members with updated balances here
        let mut new_members = self.members.clone();
        // verification context
        let secp = Secp256k1::new();
        // collect all the payments
        let mut all_payments = vec![];
        let mut spent = Amount::from_sat(0);
        // for each payment...
        for (
            from,
            PaymentRequest {
                hex_der_sig,
                fee,
                payments,
            },
        ) in update.payments.iter()
        {
            // every from must be in the members
            let balance = self
                .members
                .get(from)
                .ok_or(CompilationError::TerminateCompilation)?;
            // total amount this member is spending (their payments + fee)
            let to_spend = payments
                .values()
                .cloned()
                .map(Amount::from)
                .fold(Amount::from_sat(0), |a, b| a + b)
                + Amount::from(*fee);
            // check for no underflow (Amount is unsigned, so use checked_sub)
            let new_balance = Amount::from(*balance)
                .checked_sub(to_spend)
                .ok_or(CompilationError::TerminateCompilation)?;
            // updates the balance or remove if empty
            if new_balance.as_sat() > 0 {
                new_members.insert(from.clone(), new_balance.into());
            } else {
                new_members.remove(from);
            }

            // collect all of this member's payments
            for (address, amt) in payments.iter() {
                spent += Amount::from(*amt);
                all_payments.push(Payment {
                    address: address.clone(),
                    amount: Amount::from(*amt).into(),
                })
            }
            // Check the signature for this request
            // came from this user
            if self.sig_needed {
                let mut hasher = sha256::Hash::engine();
                hasher.write(&self.sequence.to_le_bytes());
                hasher.write(&Amount::from(*fee).as_sat().to_le_bytes());
                for (address, amt) in payments.iter() {
                    hasher.write(&Amount::from(*amt).as_sat().to_le_bytes());
                    hasher.write(address.script_pubkey().as_bytes());
                }
                let h = sha256::Hash::from_engine(hasher);
                let m = Message::from_slice(&h.as_inner()[..]).expect("Correct Size");
                let signed: Vec<u8> = FromHex::from_hex(&hex_der_sig)
                    .map_err(|_| CompilationError::TerminateCompilation)?;
                let sig = Signature::from_der(&signed)
                    .map_err(|_| CompilationError::TerminateCompilation)?;
                let _: () = secp
                    .verify(&m, &sig, &from.key)
                    .map_err(|_| CompilationError::TerminateCompilation)?;
            }
        }
        // Send any leftover funds to a new pool
        let change = NextTxPool {
            members: new_members,
            sequence: self.sequence + 1,
            sig_needed: self.sig_needed,
        };
        // We'll use the contract from our last post to make the state
        // transitions more efficient!
        // Think about what else could be fun here though...
        let out = TreePay {
            participants: all_payments,
            radix: 4,
        };
        ctx.template()
            .add_output(change.total(), &change, None)?
            .add_output(spent, &out, None)?
            .into()
    }
}

Now it’s pretty neat – rather than “exercise for the reader”, we can have Sapio generate payment pool updates for us. And exiting from the pool is very efficient and keeps most of the pool intact. But speaking of exercises for the reader, try thinking through these extensions6

No Code: Payout to where?

Payouts in this version are defined as being to an address.

How creative can we get with that? What if the payment request is 1 BTC to address X, and we generated X as a Sapio Vault expecting 1 BTC?

What else cool can we do?

Cut-through

We could make our DoTx differentiate between internal and external payouts. An internal payout would allow adding a new key OR increasing the balance of an existing key before other payments are processed. E.g., suppose we have Alice with 1 BTC and Bob with 2. Under the code above, Alice sending 0.5 to Bob and Bob sending 2.1 to Carol externally would fail (Bob’s internal balance is never credited), and Alice’s 0.5 would leave the pool as an on-chain payout. If we want to keep funds in the pool, we can do that! And if we want balances from new internal transfers to be spendable, we could process the credits before any deductions (see the sketch below).

Internal transfers to multiple addresses per user can also be used to improve privacy!
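One possible shape for that (hypothetical, not part of the contract above): split the payout type so internal credits can be applied to new_members before any external deductions are checked.

/// Sketch of a cut-through-aware payout type (hypothetical extension).
enum Payout {
    /// credit another pool member's balance; no on-chain output is created
    Internal { to: PublicKey, amount: AmountF64 },
    /// pay out on-chain to an address; funds leave the pool
    External { to: Address, amount: AmountF64 },
}
// In do_tx, you'd apply all Internal credits to `new_members` first, then
// check each member's External payments + fee against their credited balance.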

Adding Inputs

It should also be possible to have external inputs add balance to the pool during any state update.

Fees?

I basically gloss over fees in this presentation… But there is more work to be done to control and process fees fairly!

Cold-er Ejections

If you get kicked out of a pool because you went offline, might you be able to specify – per user – some sort of vault program for the evicted coins to go into?

Howdy Partner

Who is next to whom is actually kinda relevant for a Pool with Efficient Ejections.

For example, if the pool splits because an undersea cable between France and Britain breaks, dividing users by English or French would be much better than splitting randomly, because after one transaction all the English users and all the French users would be separated and able to coordinate again.

What different heuristics might you group people by? Reputation system? Amount of funds at stake? Random? Sorted lexicographically?

Let’s look at some pictures:

Creating a Pool

Pool Created!

Inspecting the Root

Entering an update

Updated TX Graph

(had a ux bug, need to fix it before I add this :p)

Do Payment Pools Need CTV?

Not necessarily. Payment pools as shown can be done today, but they require participants to use their own emulation / pre-signing servers before depositing funds.

This might not seem bad; we already need everyone online for an update, right? It’s truly not awful. However, many use cases of payment pools essentially require being able to generate a payment pool without having all of the parties online at the time of creation. E.g., imagine that your exchange matches you with reputable payment pool counterparties when you withdraw (if you request it). We’ll see the need concretely in a future post.

What about the Taproots

Unfortunately, rust-bitcoin/miniscript work on Taproot is still ongoing, so I can’t show you how cool Taproot is for this. But essentially, our Self::all_signed clauses become just a single key! And they can be non-interactively generated at every level for the tree-ejection version. This is great! It will work pretty much automatically without changing the user-code once the compiler supports taproot. Huge boon for privacy and efficiency!

Contrast this V.S….

As noted1, there are some other proposals out there.

It’s the author’s opinion that Sapio + CTV are the best form of payment pool compared to alternatives for both scalability and privacy. To fully understand why is a lot more technical than this already technical post (believe it or not).

If you want to get into it, you can see my accounting for costs on the mailing list:

It boils down to a few things:

  1. Cheaper
  2. Simpler
  3. More Composable
  4. Better Privacy

In posts coming soon we’ll get a heck’n lot more creative with what goes inside a payment pool, including lightning, mining pools, and “daos”! But that’s all for today.

  1. Credit is boring, but I presented the ideas for them originally at SF Bitdevs in May 2019, and Greg Maxwell followed up on the concept more thoroughly in #bitcoin-wizards afterwards. Gleb and Antoine have also been thinking about it recently (under the name Coin Pools – to be honest we’ll have to duke it out since I like the name Coin Pools better than Payment Pool so unclear if it’s going to be like “payment channels” for a variety of designs or “the lightning network”…), as well as AJ/Greg with TLUV 2

  2. Debatably, one could have a protocol where it’s a number of utxos but the core idea is that it should not be 1 user to 1 utxo. 

  3. This implies that no user can block the other users. 

  4. Usually all users, not a subset. But possible to do fewer than all. 

  5. Credit to Greg Maxwell for this description. It’s potent. 

  6. please do try! I think you can :) 



Congestion Control

Day 12: Rubin's Bitcoin Advent Calendar

Welcome to day 12 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

Congestion is an ugly word, eh? When I hear it my fake synesthesia triggers a green slime feeling, being stuck in traffic with broken AC, and ~the bread line~ waiting for your order at a crowded restaurant when you’re super starving. All not good things.

So Congestion Control sounds pretty sweet right? We can’t do anything about the demand itself, but maybe we can make the experience better. We can take a mucinex, drive in the HOV lane, and eat the emergency bar you keep in your bag.

How might this be used in Bitcoin?

  1. Exchange collects N addresses they need to pay some bitcoin to
  2. Exchange inputs them into this contract
  3. Exchange gets a single-output transaction, which they broadcast with high fee to get quick confirmation.
  4. Exchange distributes the redemption paths to all recipients (e.g. via mempool, email, etc).
  5. Users verify that the funds are “locked in” with this contract.
  6. Party
  7. Over time, when users are willing to pay fees, they CPFP pay for their redemptions (worst case cost \(O(\log N)\))

Throughout this post, we’ll show how to build the above logic in Sapio!


Before we get into that…

Talk Nerdy To Me

Let’s define some core concepts… Don’t worry too much if these are a bit hard to get, it’s just useful context to have or think about.

Latency

Latency is the time from some notion of “started” to “stopped”. In Bitcoin you could think of the latency from 0 confirmations on a transaction (in mempool) to 1 confirmation (in a block), which at best is expected to be about 10 minutes for high-fee transactions, but could be longer depending on the other transactions.

Fairness

Fairness is a measure of how “equitable” a distribution of goods or services is. For example, suppose I want to divide 10 cookies among 10 children.

What if 1 child gets two cookies and the other 9 get 8/9ths of a cookie each? Or what if 1 child gets no cookie and the other 9 get 10/9ths of a cookie each? How fair is that?

Mathematicians and computer scientists love to come up with different measures of fairness to be able to quantitatively compare these scenarios and their relative fairness.

In Bitcoin we might think of different types of fairness: how long does your transaction spend in the mempool? How much fee did you pay?

Throughput & Capacity

Let’s spend another moment on fairness. Perfectly fair would be:

  1. All children get 1 cookie
  2. All children get 1/10th of 1 cookie.
  3. All children get 0 cookies.

Clearly only one of these is particularly efficient.

Thus, we don’t just want to measure fairness, we also want to measure the throughput against the capacity. The capacity is the maximum throughput, and the throughput is essentially how many of those cookies get eaten (usually, over time). Now let’s look at our prior scenarios:

  1. All children get 1 cookie: Perfect Throughput.
  2. All children get 1/10th of 1 cookie: 1/10th Throughput/Capacity.
  3. All children get 0 cookies: 0 Throughput :(

In this case it seems simple: why not just divide the cookies you big butt!

Well sometimes it’s hard to coordinate the sharing of these resources. For example, think about if the cookies had to be given out in a buffet. The first person might just take two cookies, not aware there were other kids who wouldn’t get one!

This maps well onto the Bitcoin network. A really rich group of people might do a bunch of relatively high fee transactions that are low importance to them and inadvertently price out lower fee transactions that are more important to the sender. It’s not malicious, just a consequence of having more money. So even though Bitcoin can achieve 1MB of base transaction data every 10 minutes, that capacity might get filled with a couple big consolidation transactions instead of many transfers.

Burst & Over Provisioning

One issue that comes up in systems is that users show up randomly. How often have you been at a restaurant with no line, you order your food, and then as soon as you sit down the line has ten people in it? Lucky me, you think. I showed up at the right time! But then ten minutes later the line is clear.

Customers show up kind of randomly. And thus we see big bursts of activity. Typically, in order to accommodate the bursts a restaurant must over-provision its staff. They only make money when customers are there, and they need to serve them quickly. But in between bursts, staff might just be watching grass grow.

The same is true for Bitcoin. Transactions show up somewhat unpredictably, so ideally Bitcoin would have ample space to accommodate any burst (this isn’t true).

Little’s Law

Little’s law is a deceptively simple concept:

\[L = \lambda \times W\]

where \(L = \) length of the queue, \(\lambda = \) the arrival rate and \(W=\) the average time a customer spends in the system.

What’s remarkable about it is that it makes almost no assumptions about the underlying process.

This can be used to think about, e.g., a mempool.

Suppose there are 10,000 transactions in the mempool, and based on historical data we see 57 txns a minute.

\[\frac{10,000 \texttt{ transactions}}{57 \texttt{ transactions per minute}} \approx 175 \texttt{ minutes}\]

Thus we can infer how long transactions will on average spend waiting in the mempool, without knowing what the bursts look like! Very cool.
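If you want to sanity-check that arithmetic, it’s literally just \(W = L / \lambda\) (a trivial helper, nothing Bitcoin-specific):

/// Little's law: average time in system W = L / lambda.
fn avg_wait_minutes(txns_in_mempool: f64, arrivals_per_minute: f64) -> f64 {
    txns_in_mempool / arrivals_per_minute
}

fn main() {
    // 10,000 txns in the mempool, ~57 arriving (and thus ~57 leaving) per minute
    println!("{:.0} minutes", avg_wait_minutes(10_000.0, 57.0)); // ≈ 175
}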

I’m just showing off

I didn’t really need to make you read that gobbledygook, but I think they are really useful concepts that anyone who wants to think about the impacts of congestion & control techniques should keep in mind… Hopefully you learned something!


It’s Bitcoin Time

Well, what’s going on in Bitcoin land? When we make a transaction there are multiple different things going on.

  1. We are spending coins
  2. We are creating new coins

Currently, those two steps occur simultaneously. Think of our cookies. Imagine if we let one kid get cookies at a time, and they also have to get their milk at the same time. Then we let the next kid go. It’s going to take

\[T_{milk} + T_{cookies}\]

to serve each kid. What if instead we said kids could get one and then the other, in separate lines?

Now it will take something closer to \(\max(T_{milk}, T_{cookies})\).1 Whichever process is longer will dominate the time. (Probably milk).

Now imagine that getting a cookie takes 1 second per child, and getting a milk takes 30 seconds. Everyone knows that you can have a cookie and have milk after. If children take a random amount of time – let’s say on average 3 minutes, sometimes more, sometimes less – to eat their cookies, then we can serve 10 kids cookies in 10 seconds, making everyone happy, and then fill up the milks while everyone is enjoying a cookie. However, if we did the opposite – got milks and then got cookies, it would take much longer for all of the kids to get something and you’d see chaos.

Back to Bitcoin. Spending coins and creating new coins is a bit like milk and cookies. We can make the spend correspond to distributing the cookies and setting up the milk line. And the creating of the new coin can be more akin to filling up milks whenever a kid wants it.

What this means practically is that by unbundling spending from redeeming we can serve a much greater number of users than if they were one aggregate product, because we are taking the “expensive part” and letting it happen later than the “cheap part”. And if we do this cleverly, the “setting up the milk line” in the splitting of the spend allows all receivers to know they will get their fair share later.

This makes the system much higher throughput for confirmations (unlimited confirmations of transfer), lower latency to confirmation (you can see when a spend will eventually pay you), but higher latency to coin creation in the best case (although potentially no different than the average case), and (potentially) somewhat worse raw throughput since we have some waste from coordinating the splitting.

It also improves costs because we may be willing to pay a higher price for part one (since it generates the confirmation) than for part two.

Can we build it?

Let’s start with a basic example of congestion control in Sapio.

First we define a payment as just being an Amount and an Address.

/// A payment to a specific address
pub struct Payment {
    /// # Amount
    /// The amount to send in btc
    pub amount: AmountF64,
    /// # Address
    /// The Address to send to
    pub address: Address,
}

Next, we’ll define a helper called PayThese, which takes a list of contracts of some kind and pays them after an optional delay in a single transaction.

You can think of this (back to our kids) as calling a group of kids at a time (e.g., table 1, then table 2) to get their cookies.

struct PayThese {
    contracts: Vec<(Amount, Box<dyn Compilable>)>,
    fees: Amount,
    delay: Option<AnyRelTimeLock>,
}
impl PayThese {
    #[then]
    fn expand(self, ctx: Context) {
        let mut bld = ctx.template();
        // Add an output for each contract
        for (amt, ct) in self.contracts.iter() {
            bld = bld.add_output(*amt, ct.as_ref(), None)?;
        }
        // if there is a delay, add it
        if let Some(delay) = self.delay {
            bld = bld.set_sequence(0, delay)?;
        }
        // pay some fees
        bld.add_fees(self.fees)?.into()
    }

    fn total_to_pay(&self) -> Amount {
        let mut amt = self.fees;
        for (x, _) in self.contracts.iter() {
            amt += *x;
        }
        amt
    }
}
impl Contract for PayThese {
    declare! {then, Self::expand}
    declare! {non updatable}
}

Lastly, we’ll define the logic for congestion control. The basics of what is happening: we are going to define two transactions, one which pays from A -> B, and then one which is guaranteed by B’s script to pay from B -> {1…n}. This splits the confirmation txn from the larger payout txn.

However, we’re going to be a little more clever than that. We’ll apply this principle recursively to create a tree.

Essentially what we are going to do is to take our 10 kids and then divide them into groups of 2 (or whatever radix). E.g.: {1,2,3,4,5,6,7,8,9,10} would become { {1,2}, {3,4}, {5,6}, {7,8}, {9,10} }. The magic happens when we recursively apply this idea, like below:

{1,2,3,4,5,6,7,8,9,10}
{ {1,2}, {3,4}, {5,6}, {7,8}, {9,10} }
{ { {1,2}, {3,4} }, { {5,6}, {7,8} }, {9,10} }
{ { {1,2}, {3,4} }, { { { 5,6}, {7,8} }, {9,10} } }
{ { { {1,2}, {3,4}}, { { {5,6}, {7,8} }, {9,10} } } }

The end result of this grouping is a single group! So now we could do a transaction to pay/give cookies to that one group, and then if we wanted 9 to get their cookie/sats we’d only have to publish:

level 0 to: Address({ { { {1,2}, {3,4} }, { { {5,6}, {7,8} }, {9,10} } } })
level 1 to: Address({ { {5,6}, {7,8} }, {9,10} })
level 2 to: Address({9,10})

Now let’s show that in code:

/// # Tree Payment Contract
/// This contract is used to help decongest bitcoin
/// while giving users full confirmation of transfer.
#[derive(JsonSchema, Serialize, Deserialize)]
pub struct TreePay {
    /// # Payments
    /// all of the payments needing to be sent
    pub participants: Vec<Payment>,
    /// # Tree Branching Factor
    /// the radix of the tree to build.
    /// Optimal for users should be around 4 or
    /// 5 (with CTV, not emulators).
    pub radix: usize,
    #[serde(with = "bitcoin::util::amount::serde::as_sat")]
    #[schemars(with = "u64")]
    /// # Fee Sats (per tx)
    /// The amount of fees per transaction to allocate.
    pub fee_sats_per_tx: bitcoin::util::amount::Amount,
    /// # Relative Timelock Backpressure
    /// When enabled, exert backpressure by slowing down
    /// tree expansion node by node either by time or blocks
    pub timelock_backpressure: Option<AnyRelTimeLock>,
}

impl TreePay {
    #[then]
        fn expand(self, ctx: Context) {
            // A queue of all the payments to be made initialized with
            // all the input payments
            let mut queue = self
                .participants
                .iter()
                .map(|payment| {
                    // Convert the payments to an internal representation
                    let mut amt = AmountRange::new();
                    amt.update_range(payment.amount);
                    let b: Box<dyn Compilable> =
                        Box::new(Compiled::from_address(payment.address.clone(),
                        Some(amt)));
                    (payment.amount, b)
                })
                .collect::<VecDeque<(Amount, Box<dyn Compilable>)>>();

            loop {
                // take out a group of size `radix` payments
                let v: Vec<_> = queue
                    .drain(0..std::cmp::min(self.radix, queue.len()))
                    .collect();
                if queue.len() == 0 {
                    // in this case, there's no more payments to make so bundle
                    // them up into a final transaction
                    let mut builder = ctx.template();
                    for pay in v.iter() {
                        builder = builder.add_output(pay.0, pay.1.as_ref(), None)?;
                    }
                    if let Some(timelock) = self.timelock_backpressure {
                        builder = builder.set_sequence(0, timelock)?;
                    }
                    builder = builder.add_fees(self.fee_sats_per_tx)?;
                    return builder.into();
                } else {
                    // There are still more, so make this group and add it to
                    // the back of the queue
                    let pay = Box::new(PayThese {
                        contracts: v,
                        fees: self.fee_sats_per_tx,
                        delay: self.timelock_backpressure,
                    });
                    queue.push_back((pay.total_to_pay(), pay))
                }
            }
    }
}
impl Contract for TreePay {
    declare! {then, Self::expand}
    declare! {non updatable}
}

So now what does that look like when we send to it? Let’s do a TreePay with 14 recipients and radix 4:

sapio studio view of treepay

As you can see, the queuing puts some structure into a batched payment! This is (roughly) the exact same code as above generating these transactions. What this also means is given an output and a description of the arguments passed to the contract, anyone can re-generate the expansion transactions and verify that they can eventually receive their money! These payout proofs can also be delivered in a pruned form, but that’s just a bonus.
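Here’s roughly what that verification looks like; compile_to_address is a stand-in for however you invoke the Sapio compiler (library or sapio-cli), which isn’t shown in this post:

use bitcoin::{Address, TxOut};

// Stand-in for compiling the published TreePay arguments back into an address.
fn compile_to_address(_args: &TreePay) -> Address {
    unimplemented!("invoke the Sapio compiler here")
}

/// Anyone holding the original TreePay arguments can recompile and check that
/// the confirmed output really commits to the tree that eventually pays them.
fn verify_payout(args: &TreePay, funding_output: &TxOut) -> bool {
    compile_to_address(args).script_pubkey() == funding_output.script_pubkey
}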

Everyone gets their cookie (confirmation of transfer) immediately, and knows they can get their milk (spendability) later. A smart wallet could manage your liquidity over pending redemptions, so you could passively expand outputs whenever fees are cheap.


There are a lot of extensions to this basic design, and we’ll see two really exciting ones tomorrow and the next day!

If you want to read more about the impact of congestion control on the network, I previously wrote two articles simulating the impact of congestion control on the network which you can read here:

What’s great about this is that not only do we make a big benefit for anyone who wants to use it, we show in the Batching Simulation that even with the overheads of a TreePay, the incentive compatible behavior around exchange batching can actually help us use less block space overall.

  1. Simplifying here – I know Amdahl’s Law… 



Inheritance Schemes for Bitcoin

Day 11: Rubin's Bitcoin Advent Calendar

Welcome to day 11 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

You are going to die.

Merry Christmas! Hopefully not any time soon, but one of these days you will shuffle off this mortal coil.

When that day comes, how will you give your loved ones your hard earned bitcoin?

You do have a plan, right?

This post is a continuation of the last post on Vaults. Whereas Vaults focus on trying to keep your coins away from someone, Inheritance focuses on making sure someone does get your coins. Basically opposites!

Basic Bitcoin Plans

Let’s say you’re a smarty pants and you set the following system up:

(2-of-3 Multisig of my keys) OR (After 1 year, 3-of-5 Multisig of my 4 family members’ keys and 1 lawyer to tie-break)

Under this setup, you can spend your funds secured by a multisig. You have to spend them once a year to keep your greedy family away, but that’s OK.

Until one day, you perish in a boating accident (shouldn’t have gone to that Flamin’ Hot Cheetos Yacht Party in Miami).

A year goes by, no one knows where your 2-of-3 keys are, and so the family’s backup keys go online.

They raid your files and find a utxoset backup with descriptors and know how to combine their keys (that you made for them most likely…) with offline signing devices to sign a PSBT, and the money comes out.

If the family can’t agree, a Lawyer who has your will can tie break the execution.
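For reference, here’s that plan sketched as a Sapio-style contract, in the same style used later in this post. The keys and the timelock value are hypothetical placeholders; it’s just the multisig policy above written out as guards:

/// A sketch of the "smarty pants" plan above.
struct BasicPlan {
    /// my 3 keys, spendable 2-of-3
    my_keys: Vec<bitcoin::PublicKey>,
    /// 4 family members + 1 lawyer, spendable 3-of-5 after the delay
    heir_keys: Vec<bitcoin::PublicKey>,
    /// "after 1 year"
    heir_delay: RelTime,
}

impl BasicPlan {
    /// normal spending path: 2-of-3 of my keys
    #[guard]
    fn spend(self, ctx: Context) {
        Clause::Threshold(2, self.my_keys.iter().cloned().map(Clause::Key).collect())
    }
    /// recovery path: after the delay, 3-of-5 of the heirs' keys
    #[guard]
    fn inherit(self, ctx: Context) {
        Clause::And(vec![
            Clause::Threshold(3, self.heir_keys.iter().cloned().map(Clause::Key).collect()),
            self.heir_delay.clone().into(),
        ])
    }
}

impl Contract for BasicPlan {
    declare! {finish, Self::spend, Self::inherit}
}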

Except wait…

Your kids are assholes, just like your spouse

So your piece of shit husband/wife doesn’t think the kids should get anything (RIP college fund), so count them out on signing the tuition payments.

Now we’re down to your 3 kids agreeing and your 1 lawyer.

Your Lawyer thinks your spouse has a bit of a case, so the whole thing’s in probate as far as they are concerned.

And the kids? Well, the kids don’t want to go to college. You just gifted them 42069 sats each, enough to pay for a ticket on Elon Musk’s spaceship. So they get together one night, withdraw all the money, and go to Mars. Or the Casino. Little Jimmy has never seen so much money, so he goes to Vegas for a last huzzah before the Mars trip, but he blows it all. So Jimmy stays behind, satless, and the other kids go to mars.

Well That Sucked

And it didn’t have to! What if you could express your last will and testament in Bitcoin transactions instead of in messy messy multisigs? You can! Today! No new features required (although they’d sure be nice…).


Building Inheritance Schemes with Sapio

You can make inheritance schemes with Sapio! While it does benefit from having CTV enabled for various reasons, technically it can work decently without CTV by pre-signing transactions with a CTV emulator.

Here we’ll develop some interesting primitives that can be used to make various inheritance guarantees.

Making a better Dead Man Switch

First off, let’s make a better dead man switch. Recall we had to move our funds once a year because of the timelocks.

That was dumb.

Instead, let’s make a challenge of liveness! (again, deep apologies on these examples, I’m a bit behind on the series so haven’t checked them as closely as I usually would…)

/// Opening state of a DeadManSwitch
#[derive(Clone)]
struct Alive {
    /// Key needed to claim I'm dead
    is_dead: bitcoin::PublicKey,
    /// If someone says i'm dead but I'm alive, backup wallet address
    is_live: bitcoin::Address,
    /// My normal spending key (note: could be a Clause instead...)
    key: bitcoin::PublicKey,
    /// How long you have to claim you're not dead
    timeout: RelTime,
    /// Addresses for CPFP Anchor Outputs
    is_dead_cpfp: bitcoin::Address,
    is_live_cpfp: bitcoin::Address,
}

impl Alive {
    #[guard]
    fn is_dead_sig(self, ctx: Context) {
        Clause::Key(self.is_dead.clone())
    }
    /// only allow the is_dead key to transition to a CheckIfDead 
    #[then(guarded_by="[Self::is_dead_sig]")]
    fn am_i_dead(self, ctx: Context) {
        let dust = Amount::from_sat(600);
        let amt = ctx.funds();
        ctx.template()
            // Send all but some dust to CheckIfDead
            .add_output(amt - dust, &CheckIfDead(self.clone()), None)?
            // used for CPFP
            .add_output(
                dust,
                &Compiled::from_address(self.is_dead_cpfp.clone(), None),
                None,
            )?
            .into()
    }
    /// Allow spending like normal
    #[guard]
    fn spend(self, ctx: Context) {
        Clause::Key(self.key.clone())
    }
}

impl Contract for Alive {
    declare! {finish, Self::spend}
    declare! {then, Self::am_i_dead}
}

/// All the info we need is in Alive struct already...
struct CheckIfDead(Alive);
impl CheckIfDead {
    /// we're dead after the timeout and is_dead key signs to take the money
    #[guard]
    fn is_dead(self, ctx: Context) {
        Clause::And(vec![Clause::Key(self.0.is_dead.clone()), self.0.timeout.clone().into()])
    }

    /// signature required for liveness claim
    #[guard]
    fn alive_auth(self, ctx: Context) {
        Clause::Key(self.0.key.clone())
    }
    /// um excuse me i'm actually alive
    #[then(guarded_by="[Self::alive_auth]")]
    fn im_alive(self, ctx: Context) {
        let dust = Amount::from_sat(600);
        let amt = ctx.funds();
        ctx.template()
            // Send funds to the backup address!
            .add_output(
                amt - dust,
                &Compiled::from_address(self.0.is_live.clone(), None),
                None,
            )?
            // Dust for CPFP-ing
            .add_output(
                dust,
                &Compiled::from_address(self.0.is_live_cpfp.clone(), None),
                None,
            )?
            .into()
    }
}

impl Contract for CheckIfDead {
    declare! {finish, Self::is_dead}
    declare! {then, Self::im_alive}
}

In this example, the funds start in the Alive state until either a challenger calls Alive::am_i_dead or the original owner spends the coin. Once Alive::am_i_dead is called, the contract transitions to the CheckIfDead state. From there, the owner has the timeout window (denominated in either time or blocks) to move the coin to their backup address via CheckIfDead::im_alive, or else the death claimant can spend using CheckIfDead::is_dead.
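To make the lifecycle concrete, instantiating the switch might look roughly like the following. The keys and addresses are hypothetical placeholders (heir_key, my_backup_address, and so on); funding and compiling the result works like any other Sapio contract:

// Hypothetical placeholder values -- in practice these come from your wallet
// and from your heirs.
let switch = Alive {
    // the heirs' key, allowed to kick off a CheckIfDead challenge
    is_dead: heir_key,
    // backup address my funds flee to if I prove I'm alive
    is_live: my_backup_address,
    // my normal spending key
    key: my_spending_key,
    // how long I get to respond to a challenge, e.g. ~90 days
    timeout: RelTime::try_from(Duration::from_secs(90 * 24 * 60 * 60)).unwrap(),
    // dust anchors for CPFP fee bumping on either branch
    is_dead_cpfp: heir_fee_address,
    is_live_cpfp: my_fee_address,
};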

Of course, we can clean up this contract in various ways (e.g., making the destination if dead generic). That could look something like this:

struct Alive {
    is_dead_cpfp: bitcoin::Address,
    is_live_cpfp: bitcoin::Address,
    // note that this permits composing Alive with some arbitrary function
    is_dead: Box<dyn Fn(Context, bitcoin::Address) -> TxTmplIt>,
    is_live: bitcoin::Address,
    key: bitcoin::PublicKey,
    timeout: RelTime,
}

impl CheckIfDead {
    #[then]
    fn is_dead(self, ctx: Context) {
        (self.0.is_dead)(ctx, self.0.is_dead_cpfp.clone())
    }
}

This kind of dead man switch is much more reliable than slowly eroding timelocks since it doesn’t require regular transaction refreshing, which was the source of a bug in Blockstream’s federation code. It also requires an explicit action to claim a lack of liveness, which gives you information about the trustworthiness of your kids (or any exploits of their signers).

Not so fast

What if we want to make sure that little Jimmy and his gambling addiction don’t blow it all at once… What if, instead of giving Jimmy one big lump sum, we gave him a little bit every month? Then maybe he’d be better off! This is basically an Annuity contract.

Now let’s have a look at an annuity contract.

struct Annuity {
    to: bitcoin::PublicKey,
    amount: bitcoin::Amount,
    period: AnyRelTime
}

const MIN_PAYOUT: bitcoin::Amount = bitcoin::Amount::from_sat(10000);
impl Annuity {
    #[then]
    fn claim(self, ctx:Context) {
        let amt = ctx.funds();
        // Basically, while there are funds left this contract recurses to itself,
        // until there's only a little bit left over.
        // No need for CPFP since we can spend from the `to` output for CPFP.
        if amt - self.amount > MIN_PAYOUT {
            ctx.template()
                .add_output(self.amount, &self.to, None)?
                .add_output(amt - self.amount, &self, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else if amt > Amount::from_sat(0) {
            ctx.template()
                .add_output(amt, &self.to, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else {
            // nothing left to claim
            empty()
        }
    }
}

We could instead “transpose” an annuity into a non-serialized form: basically one big transaction with N outputs and a locktime on claiming each (a sketch of this follows the list below). However, this has a few drawbacks:

  1. Claims are non-serialized, which means that relative timelocks can only last at most 2 years. Therefore only absolute timelocks may be used.

  2. You might want to make it possible for another entity to counterclaim Jimmy’s funds back, perhaps if he also died (talk about bad luck). In the transposed version, you would need to make N proof-of-life challenges vs. just one1.

  3. You would have to pay more fees all at once (although less fees overall if feerates increase or stay flat).

  4. It’s less extensible – the serialized form makes it possible to do a lot of cool things with the sequence of payouts (e.g., allowing oracles to inflation-adjust the payout rate).
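For comparison, a sketch of the transposed form (assuming, as with the other examples, that AbsTime converts into a Clause and is cloneable) might look like this:

/// One payout: spendable by `to` only after an absolute date.
#[derive(Clone)]
struct TimeLockedPayout {
    to: bitcoin::PublicKey,
    date: AbsTime,
}
impl TimeLockedPayout {
    #[guard]
    fn claim(self, ctx: Context) {
        Clause::And(vec![Clause::Key(self.to.clone()), self.date.into()])
    }
}
impl Contract for TimeLockedPayout {
    declare! {finish, Self::claim}
}

/// The "transposed" annuity: one expansion transaction with N outputs,
/// each locked to a later and later absolute date.
struct TransposedAnnuity {
    to: bitcoin::PublicKey,
    amount: bitcoin::Amount,
    /// absolute claim dates, one per payout
    dates: Vec<AbsTime>,
}
impl TransposedAnnuity {
    #[then]
    fn expand(self, ctx: Context) {
        let mut tmpl = ctx.template();
        for date in self.dates.iter() {
            tmpl = tmpl.add_output(
                self.amount,
                &TimeLockedPayout { to: self.to.clone(), date: date.clone() },
                None,
            )?;
        }
        tmpl.into()
    }
}
impl Contract for TransposedAnnuity {
    declare! {then, Self::expand}
}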

Splits

Remember our annoying spouse, bad lawyer, etc? Well, instead of giving them a multisig, imagine we use the split function as the end output from our CheckIfDead:

fn split(ctx: Context, cpfp: bitcoin::Address) -> TxTmplIt {
    let dust = Amount::from_sat(600);
    let amt = ctx.funds() - dust;
    ctx.template()
        .add_output(dust, &Compiled::from_address(cpfp, None), None)?
        // 50% to the spouse's annuity
        .add_output(amt / 2, &from_somewhere::spouse_annuity, None)?
        // ~16.6% to each of the kids' annuities
        .add_output(amt / 6, &from_somewhere::kids_annuity[0], None)?
        .add_output(amt / 6, &from_somewhere::kids_annuity[1], None)?
        .add_output(amt / 6, &from_somewhere::kids_annuity[2], None)?
        .into()
}

This way we don’t rely on any pesky disagreement over what to sign, the funds are split exactly how we like.

Oracles and Lawyers

Lastly, it is possible to bake all sorts of conditionality into these contracts.

For example, imagine an Annuity that only makes payouts if a University Attendance Validator signs your tuition payment, otherwise you get the coins on your 25th Birthday.

struct Tuition {
    /// keep this key secret from the school
    to: bitcoin::PublicKey,
    enrolled: bitcoin::PublicKey,
    school: bitcoin::PublicKey,
    amount: bitcoin::Amount,
    period: AnyRelTime,
    birthday: AbsTime,
}

const MIN_PAYOUT: bitcoin::Amount = bitcoin::Amount::from_sat(10000);
impl Tuition {
    #[guard]
    fn enrolled(self, ctx: Context) {
        Clause::And(vec![Clause::Key(self.enrolled), Clause::Key(self.to)])
    }
    #[then(guarded_by="[Self::enrolled]")]
    fn claim(self, ctx:Context) {
        let amt = ctx.funds();
        if amt - self.amount > MIN_PAYOUT {
            // send money to school
            ctx.template()
                .add_output(self.amount, &self.school, None)?
                .add_output(amt - self.amount, &self, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else if amt > Amount::from_sat(0) {
            // give the change to child
            ctx.template()
                .add_output(amt, &self.to, None)?
                .set_sequence(-1, self.period.into())?
                .into()
        } else {
            empty()
        }
    }
    #[guard]
    fn spend(self, ctx: Context) {
        Clause::And(vec![self.birthday.into(), Clause::Key(self.to)])
    }
}

impl Contract for Tuition {
    declare! {then, Self::claim}
    declare! {finish, Self::spend}
}

The oracle can’t really steal funds here – they can only sign the already agreed-upon transaction, sending the tuition payment to the “school” key. And on the specified birthday, if the funds weren’t used for tuition, they go to the child directly.

Where do these live?

In theory, what you’d end up doing is attaching these to every coin in your wallet under a dead man switch.

Ideally, you’d put enough under your main “structured” splits that you’re not moving it all too often, and have the rest go into less structured stuff. E.g., you might touch the college-fund coins less frequently than the coins for the general annuity. You can also sequence some things using absolute timelocks, for example.

In an ideal world you would have a wallet agent that is aware of all your UTXOs and your will and testament state and makes sure to regenerate the correct conditions whenever you spend and then store them durably, but that’s a bit futuristic for the time being. With CTV the story is a bit better, as for many designs you could distribute a WASM bundle for your wallet to your family and they could use that to generate all the transactions given an output, without needing to have every presigned transaction saved.

This does demonstrate a relative strength of the account model: it’s much easier to keep all your funds in one account and write globally correct inheritance vault logic around it, computed across percentages. No matter the UTXO-model covenant, the fact that someone might have multiple UTXOs poses an inherent challenge to doing this kind of thing properly.

What else?

Well, this is just a small sampling of things you could do. Part of the power of Sapio is that you can build your own bespoke inheritance scheme – I hope you’re feeling inspired to! No one size fits all, ever, but perhaps with the power of Sapio available to the world we’ll see a lot more experimentation with what’s possible.


Till next time – Jeremy.

  1. Note this is a case where unrolling can be used, but the contract sizes can blow up kinda quick, so careful programming might be needed or you might need to say that it can only be claimed that Jimmy is dead once or twice before he just gets all the money. Recursive covenants would not necessarily have this issue. 



Building Vaults on Bitcoin

Day 10: Rubin's Bitcoin Advent Calendar

Welcome to day 10 of my Bitcoin Advent Calendar. You can see an index of all the posts here or subscribe at judica.org/join to get new posts in your inbox

A “Vault” is a general concept for a way of protecting Bitcoin from theft through a cold-storage smart contract. While there is no formal definition of what is and is not a Vault, generally a Vault has more structure around a withdrawal than just a multisig.

One of the earlier references for Vaults was a design whereby every time you request to withdraw from it you can “reset” the request within a time limit. This means that while an attacker might steal your keys, you can “fight” to make it a negative sum game – e.g., they’ll just keep on paying fees to eventually steal an amount less than they paid. This might serve to disincentivize hacking exchanges if hackers are less likely to actually get coins.

Similar Vaults can be built using Sapio, but the logic for them involves unrolling the contract a predefined number of steps. This isn’t bad because if the timeout period is 1 week, then just unrolling 5,200 times gets you one hundred years of hacking disincentive.

The contract for that might look something like this in Sapio (note: I was running behind on this post so I may make modifications to make these examples better later):

struct VaultOne {
    /// Key that will authorize:
    /// 1) Recursing with the vault
    /// 2) Spending from the vault after not moved for a period
    key: bitcoin::PublicKey,
    /// How long should the vault live for
    steps: u32,
}

impl VaultOne {
    /// Checks if steps are remaining
    #[compile_if]
    fn not_out_of_steps(self, ctx: Context) {
        if self.steps == 0 {
            ConditionalCompileType::Never
        } else {
            ConditionalCompileType::NoConstraint
        }
    }

    #[guard]
    fn authorize(self, ctx: Context) {
        Clause::Key(self.key.clone())
    }

    /// Recurses the vault if authorized
    #[then(compile_if = "[Self::not_out_of_steps]", guarded_by = "[Self::authorize]")]
    fn step(self, ctx: Context) {
        let next = VaultOne {
            key: self.key.clone(),
            steps: self.steps - 1,
        };
        let amt = ctx.funds();
        ctx.template()
            .add_output(amt, &next, None)?
            // For Paying fees via CPFP. Note that we should totally definitely
            // get rid of the dust limit for contracts like this, or enable
            // IUTXOS with 0 Value
            .add_output(Amount::from_sat(0), &self.key, None)?
            .into()
    }
    /// Allow spending after a week long delay
    #[guard]
    fn finish(self, ctx: Context) {
        Clause::And(vec![
            Clause::Key(self.key.clone()),
            RelTime::try_from(Duration::from_secs(7 * 24 * 60 * 60))
                .unwrap()
                .into(),
        ])
    }
}
/// Binds the logic to the Contract
impl Contract for VaultOne {
    declare! {then, Self::step}
    declare! {finish, Self::finish}
}

But we can also build much more sophisticated Vaults that do more. Suppose we want to have a vault where once a week you can claim a trickle of bitcoin into a hot wallet, or you can send it back to a cold storage key. This is a “structured liquidity vault” that gives you time-release Bitcoin. Let’s check out some code and talk about it more:

#[derive(Clone)]
struct VaultTwo {
    /// Key just for authorizing steps
    authorize_key: bitcoin::PublicKey,
    amount_per_step: bitcoin::Amount,
    /// Hot wallet key
    hot_key: bitcoin::PublicKey,
    /// Cold wallet key
    cold_key: bitcoin::PublicKey,
    steps: u32,
}

impl VaultTwo {
    #[compile_if]
    fn not_out_of_steps(self, ctx: Context) {
        if self.steps == 0 {
            ConditionalCompileType::Never
        } else {
            ConditionalCompileType::NoConstraint
        }
    }

    #[guard]
    fn authorized(self, ctx: Context) {
        Clause::Key(self.authorize_key.clone())
    }
    #[then(compile_if = "[Self::not_out_of_steps]", guarded_by = "[Self::authorized]")]
    fn step(self, ctx: Context) {
        // Creates a recursive vault with one fewer steps
        let next = VaultTwo {
            steps: self.steps - 1,
            ..self.clone()
        };
        let amt = ctx.funds();
        ctx.template()
            // send to the new vault
            .add_output(amt - self.amount_per_step, &next, None)?
            // withdraw some to hot storage
            .add_output(self.amount_per_step, &self.hot_key, None)?
            // For Paying fees via CPFP. Note that we should totally definitely
            // get rid of the dust limit for contracts like this, or enable
            // IUTXOS with 0 Value
            .add_output(Amount::from_sat(0), &self.authorize_key, None)?
            // restrict that we have to wait a week
            .set_sequence(
                -1,
                RelTime::try_from(Duration::from_secs(7 * 24 * 60 * 60))?.into(),
            )?
            .into()
    }
    /// allow sending the remaining funds into cold storage
    #[then(compile_if = "[Self::not_out_of_steps]", guarded_by = "[Self::authorized]")]
    fn terminate(self, ctx: Context) {
        ctx.template()
            // send the remaining funds to cold storage
            .add_output(self.amount_per_step * u64::from(self.steps), &self.cold_key, None)?
            // For Paying fees via CPFP. Note that we should totally definitely
            // get rid of the dust limit for contracts like this, or enable
            // IUTXOS with 0 Value
            .add_output(Amount::from_sat(0), &self.authorize_key, None)?
            .into()
    }
}

impl Contract for VaultTwo {
    declare! {then, Self::step, Self::terminate}
}

This type of Vault is particularly interesting for, e.g., withdrawing from an exchange business. Imagine a user, Elsa, who wants to have a great cold storage system. So Elsa sets up an xpub key and puts it on ice. She then generates a new address and requests that the exchange let the funds go to it. Later that month, Elsa wants to buy a coffee with her Bitcoin, so she has to thaw out her cold storage to spend (maybe using offline PSBT signing) and transfer the funds to her destination, or to a hot wallet if she wants a bit of extra pocket money. Instead, suppose Elsa sets up a time-release vault. Then she can automatically claim 1 Bitcoin a month out of her cold vault, or, if she notices some coins missing from her hot wallet, redirect the remaining funds back under her ice castle.
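As a rough sketch of Elsa’s setup (the keys are hypothetical placeholders, and note that the step delay is hard-coded to one week in VaultTwo above; a monthly trickle would just use a different RelTime there):

// Hypothetical keys: the authorize/cold keys live with the ice-castle signer,
// the hot key lives in Elsa's day-to-day wallet.
let elsas_vault = VaultTwo {
    authorize_key: elsa_authorize_key,
    // 1 BTC released per step
    amount_per_step: Amount::from_sat(100_000_000),
    hot_key: elsa_hot_key,
    cold_key: elsa_cold_key,
    // number of trickle payments before the vault is exhausted
    steps: 12,
};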

This has many benefits for an average user. One is that you can invest in your cold storage of keys once in your life and only have to access it in unexpected circumstances. This means that users might elect to use something more secure/inconvenient to access (e.g. strongly geo-sharded), that they won’t reveal access patterns by visiting their key storage facility, and that they don’t need to expose themselves to recurring fat-finger1 risk.

Getting a little more advanced

What are some other things we might want to do in a vault? Let’s do a quickfire – we won’t code these here, but you’ll see examples of these techniques in posts to come:

Send a percentage, not a fixed amount

Let the contract know the intended amount, and then compute the withdrawals as percentages in the program.

Non-Key Destinations

In the examples above, we use keys for hot wallet, cold wallet, and authorizations.

However, we could very well use other programs! For example, imagine a time-release vault that goes into an anti-theft locker.

Change Hot Wallet Every Step

This one is pretty simple – if you have N steps just provide a list of N different destinations and use the i-th one as you go!

Topping up

There are advanced techniques that can be used to allow depositing into a vault after it has been created (i.e., topping up), but that’s too advanced to go into detail on today. For those inclined, a small hint: make the “top up” vault consume an output from the previous vault; CTV commits to the script, so you can use a salted P2SH output.

Even more advanced

What if we want to ensure that after a withdrawal, funds are re-inserted into the Vault?

We’ll ditch the recursion (for now), and just look at some basic logic. Imagine a coin is held by a cold storage key, and we want to use Sapio to generate a transaction that withdraws funds to an address and sends the rest back into cold storage.

struct VaultThree {
    key: bitcoin::PublicKey,
}

/// Special struct for passing arguments to a created contract
enum Withdrawal {
    Send {
        addr: bitcoin::Address,
        amount: bitcoin::Amount,
        fees: bitcoin::Amount,
    },
    Nothing,
}
/// required...
impl Default for Withdrawal {
    fn default() -> Self {
        Withdrawal::Nothing
    }
}
impl StatefulArgumentsTrait for Withdrawal {}

/// helper for rust type system issue
fn default_coerce(
    k: <VaultThree as Contract>::StatefulArguments,
) -> Result<Withdrawal, CompilationError> {
    Ok(k)
}

impl VaultThree {
    #[guard]
    fn signed(self, ctx: Context) {
        Clause::Key(self.key.clone())
    }
    #[continuation(guarded_by = "[Self::signed]", coerce_args = "default_coerce")]
    fn withdraw(self, ctx: Context, request: Withdrawal) {
        if let Withdrawal::Send { amount, fees, addr } = request {
            let amt = ctx.funds();
            ctx.template()
                // send the rest recursively to this contract
                .add_output(amt - amount - fees, &self, None)?
                // process the withdrawal
                .add_output(amount, &Compiled::from_address(addr, None), None)?
                // mark fees as spent
                .spend_amount(fees)?
                .into()
        } else {
            empty()
        }
    }
}
impl Contract for VaultThree {
    declare! {updatable<Withdrawal>, Self::withdraw}
}

Now we’ve seen how updatable continuation clauses can be used to dynamically pass arguments to a Sapio contract and let the module figure out what the next transactions should be, managing recursive and non-enumerated state transitions (albeit with a trust model).
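For a sense of how this gets used, the operator side is just constructing a Withdrawal and handing it to the withdraw continuation; the values below are hypothetical, and the plumbing that actually delivers continuation arguments to a deployed contract (e.g., via the Sapio runtime or CLI) isn’t shown here:

// Hypothetical values for a single withdrawal request.
let request = Withdrawal::Send {
    // where the withdrawn coins should go
    addr: destination_address,
    amount: Amount::from_sat(1_000_000),
    // fee budget carved out of the vault for this transaction
    fees: Amount::from_sat(5_000),
};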


That’s probably enough for today, before I make your head explode. We’ll see more examples soon!

  1. Sending the wrong amount because you click the wrong key with your too-large hands. 


© 2011-2021 Jeremy Rubin. All rights reserved.