S4:E4 Hart Lambur of Across Protocol - Intents and the Future of Interop

Katherine Wu, Nick Pai

On this episode of Archebyte, Katherine Wu and fellow Archetype team member, Nick Pai, are joined by the CEO and co-founder of Risk Labs, Hart Lambur, to talk bridges, L2s, and interoperability.

Hart and Risk Labs are best known for creating UMA, an optimistic oracle, and Across Protocol, a crosschain bridge. In the multichain future that is quickly becoming the present, infrastructure like oracles, bridges, and sequencers play crucial roles. But, these tools that are meant to facilitate interoperability between blockchains still have plenty of room for improvement.

During our conversation, we dig into the problems that bridges currently face, how Across is designed to mitigate them, and what a safer multichain ecosystem looks like. Hart and Nick give their thoughts on centralized, decentralized, and shared sequencers, intents, and the role aggregators play in regards to bridges. They also answer the loaded question - are bridges L2s?

📬 To keep up with the latest from Archebyte and receive articles and research from the rest of the Archetype team, subscribe to our newsletter: http://eepurl.com/iCApL2

- - - - - - - -

TIMESTAMPS

0:00 Intro

2:10  The bridging problem

5:30 How Across works

8:15 Making safe bridges

15:31 L2s and centralized vs decentralized sequencers

20:05 Sequencer misconceptions?

23:35 Aggregators in bridging

27:03 L2 metrics for success

👋 FOLLOW US

Hart: https://twitter.com/hal2001 

Nick: https://twitter.com/mountainwaterpi 


Archetype: https://twitter.com/archetypevc

Katherine: https://twitter.com/katherineykwu 

🌐 LINKS

UMA: https://uma.xyz/
Across: https://across.to/ 

Risk Labs: https://risklabs.foundation/

📜 TRANSCRIPT

KATHERINE

Hello everyone, and welcome back to Archebyte. Every other week on Archebyte we have on some of the smartest builders and founders in the crypto industry to tell us what's top of mind for them. Today we are joined by Hart Lambur, who is the CEO and co-founder of Risk Labs. Risk Labs is most known for launching the UMA Protocol, as well as the Across Protocol.

We also have a special guest today along with Hart - Nick Pai - who joined Risk Labs in 2020, was an early engineer for UMA Protocol, and is now a tech lead at Across. He is also a research partner with me at Archetype and we are very lucky to have his perspective within the firm. Now, without further ado, today's episode is about all things bridges, L2s, and interop, from the taxonomy to the design choices to Nick and Hart's vision of the interop world. So welcome to the show, Nick and Hart. 

HART

Thanks, Katherine. It's going to be fun. 

NICK

Thanks, Katherine.

KATHERINE

It's going to be very fun. It's going to be very nerdy, but very fun. Now, to kick us off, actually, I'm going to turn this first question over to Nick. Give us a very high level brief overview of the bridging architecture taxonomy to start.

NICK

Yeah, so I will set the stage and I'll describe the bridging problem briefly. And then I want to propose a framework for thinking about the solutions to this problem and then hand it over to Hart.

So today, in 2023, blockchains are definitively multichain. The multichain future that we've all been talking about has arrived at this point. Many of these chains are isolated from each other, and those that are connected are either connected insecurely or securely, but slowly.

So there's this new set of applications that define the interoperability industry that try to service users who want to transfer value and information quickly, securely and permissionlessly. This is a really hard problem, as we've seen. 

The first bit about transferring information quickly is really hard because you can't really be confident about the state on the origin chain before you send it to the destination - not until the origin chain information has finalized. And usually finality can take several hours to several days.

And transferring messages securely is also really hard because you need to make sure that the message that the user wants to transfer from one chain to another is not subject to censorship by a central party. In other words, the sender should always be able to fast forward their own message to the destination.

And I think this presents a really hard UX problem, in that sending a transaction from one chain to another feels magical, and it's very scary to see your transaction occur on one chain and then have to wait for it to appear on the other chain. Ideally there should be a trace of transactions from the origin to the destination such that the user can always feel secure, such that they can recover their message or revert it if something happens. And if this trace of transactions from the origin chain to the destination ever disappears, it feels scary, and there's probably a trusted actor somewhere. 

So what Hart and I work on is something called a fast bridge, and fast bridges are essentially risk transfers to users. Fast bridges introduce a third party called a relayer or filler, who takes on the risk of finality on the origin chain and forwards messages really quickly to the user on the destination chain.

So the relayer is taking on all of this risk and the user is going to pay them for the privilege of transferring their message across. And so the whole challenge of bridges is determining whether this relayer acted correctly and determining whether to pay them a reward. And I think there are roughly four categories, and I'll just describe them at a very high level and we can get into details.

But I think the four categories are asset type — so what kind of message is being transferred? Is it a token or just an arbitrary message? Settlement — how does the bridge actually refund or reward relayers? Is this done individually or in batch settlements? Validation — how does the bridge validate that the message was transferred correctly: optimistically, validity proofs, proof of authority? And finally, for token bridges — which are a subset of bridges that Hart and I really care about — where does liquidity come from on the destination chain? Is it held passively onchain or is it brought onchain just in time from market makers? 

So I think that's how I view the four categories. I’m kind of curious if Hart has a different framework for that.
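Nick's four axes can be sketched as a small classification. This is purely illustrative - the type and member names below are hypothetical, not anything from the Across codebase:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical enums illustrating the four bridge-design axes Nick describes.
class AssetType(Enum):
    TOKEN = "token"
    ARBITRARY_MESSAGE = "arbitrary message"

class Settlement(Enum):
    INDIVIDUAL = "individual"
    BATCH = "batch"

class Validation(Enum):
    OPTIMISTIC = "optimistic"
    VALIDITY_PROOF = "validity proof"
    PROOF_OF_AUTHORITY = "proof of authority"

class Liquidity(Enum):
    PASSIVE_POOLS = "passive onchain pools"
    JUST_IN_TIME = "just-in-time from market makers"

@dataclass
class BridgeDesign:
    asset_type: AssetType
    settlement: Settlement
    validation: Validation
    liquidity: Liquidity

# E.g. the combination Nick and Hart advocate later in the episode:
across_like = BridgeDesign(
    AssetType.TOKEN,
    Settlement.BATCH,
    Validation.OPTIMISTIC,
    Liquidity.JUST_IN_TIME,
)
```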

HART

Yeah, I mean, Nick, you and I work together, so we probably agree a lot, and I do. I think what I'd emphasize and add to your analysis here is just that most bridges follow this paradigm. They aren't this intents-based bridging architecture that we work on at Across. 

So maybe to describe like the other bridges, it's basically deposit asset on origin chain, and then something happens, some magic middle thing happens where a message is sent from origin to destination, and then funds are released on destination chain.

And like you said, there are problems here where that something magic in the middle happening - if you're sending a message - it's costly, like even just gas costs, or it's insecure, or it takes time. So when we were inventing Across, we were trying to figure out a way to do it differently - that's what I'd add to your analysis here.

And the way we do it differently is, like you said, by adding this third party - the relayer - that actually just fills the user with their own funds, with their own money. And fills them very quickly with their own money. So quickly that it actually emulates atomicity. It's almost like this is an atomic fill for the user. And again, the reason why this is so quick and fast is because this relayer takes on that risk themselves.

And then what happens is a user gets the fast fill and then the relayer sits there and has to wait to get paid back. And the interesting point here is that we have a tradeoff in this design versus the other set of designs - this relayer has to make a loan. They're effectively loaning funds for, let's say, an hour.

But here's the brilliant thing, right? At a 10% annualized interest rate, the cost of loaning funds for an hour works out to about a 10th of a basis point. It's a really teeny, tiny number. And what we realized is that you can have this fast, great user experience because of this very short-term loan. It doesn't really cost much. And it turns out that 10th of a basis point cost can be saved by doing intelligent things, which we do with the Across protocol, to save gas in other ways. 

And so we kind of have this cool tradeoff where this relayer is lending money, which seems like it could be costly, but it turns out it's not that costly, and we're actually able to more than make up those costs by doing fun gas optimizations everywhere. So that's the other point that I've been harping on recently about how this works, and I find it cool. 
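Hart's back-of-the-envelope number checks out. A quick arithmetic sketch of the one-hour loan cost he describes:

```python
# Back-of-the-envelope check of Hart's relayer-loan cost:
# capital lent for one hour at a 10% annualized interest rate.
annual_rate = 0.10
hours_per_year = 365 * 24            # 8760 hours
cost_fraction = annual_rate / hours_per_year
cost_bps = cost_fraction * 10_000    # in basis points (1 bp = 0.01%)
print(f"{cost_bps:.3f} bps")         # prints 0.114 bps
```

So an hour-long loan costs roughly a tenth of a basis point of the amount relayed, which is what makes the fast-fill model viable.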

KATHERINE

Well, I mean, we all love capital efficiency, so that is cool. Let me maybe dig more into the design, specifically the design choices. Aside from exchanges, I think bridges probably make up some of the biggest hacks in our industry - if you read through the top 10-20 hacks, a big portion of them are from bridges getting hacked. And I do think it's important to delineate between the types of bridges that have historically been hacked and what you guys are working on. As you think through design architecture for bridges, is there such a thing as a truly safe bridge design?

HART

So Nick I'll take a first pass and then you can add or critique. So I'd make two points here. One point is that many of the bridge hacks are what we call lock and mint bridge designs, where you go and you deposit a native asset on the origin chain. So let's just say it’s ETH, I deposit ETH on Ethereum. And then I want to bridge a representation of ETH to a destination chain, like let's just take Solana. I want to bridge to Solana. And Solana has no native ETH, so I have a wrapped version of it. I have my bridge’s version of ETH on Solana. And again, in this design, we still have the deposit and then like magic in the middle happens to send a message to Solana and then Solana releases the wrapped ETH.

And this lock and mint design is super scary because it means that I'm sitting there with my money, my wrapped ETH, on Solana. But if that bridge architecture ever fails, there's this huge honeypot of money locked on the origin chain that can get drained. And some of the biggest hacks like Wormhole and Nomad - this is basically what happened.
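The lock-and-mint pattern Hart describes can be sketched in a few lines. This is a toy model with hypothetical names, showing why all locked deposits accumulate into a single honeypot:

```python
# Toy lock-and-mint bridge (hypothetical, for illustration only).
# All deposits sit in one origin-chain contract: if message verification
# is ever compromised, that entire locked balance is at risk.
class LockAndMintBridge:
    def __init__(self):
        self.locked_eth = 0.0   # the honeypot on the origin chain
        self.wrapped_eth = {}   # wrapped balances on the destination chain

    def deposit(self, user, amount):
        self.locked_eth += amount  # lock native ETH on the origin chain
        self.wrapped_eth[user] = self.wrapped_eth.get(user, 0.0) + amount  # mint wrapped

    def withdraw(self, user, amount):
        assert self.wrapped_eth.get(user, 0.0) >= amount
        self.wrapped_eth[user] -= amount  # burn wrapped on the destination
        self.locked_eth -= amount         # release native ETH on the origin

bridge = LockAndMintBridge()
bridge.deposit("alice", 5.0)
bridge.deposit("bob", 3.0)
# 8 ETH now sits in one contract: the honeypot a bridge exploit drains.
```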

So one source of bridge risk is this lock and mint design, which I don't want to speak for Nick, but I know he agrees, we don't like this. We don't like fake versions of tokens. We only want to deal with canonical versions. 

That brings me to my second point. On many chains there is a “canonical” way of sending messages or bridging tokens. So Arbitrum and Optimism, as examples, have canonical bridges that are close - they're not always exactly the same, but they're close to the same - and trusting the canonical bridge is basically like trusting the chain itself. There's a little technical nuance in here, which is worth exploring, but not right here. But basically if you're using the canonical bridges, they're essentially as trustworthy as Arbitrum or Optimism themselves.

And so the answer to your question is, I think if you are only using canonical bridges, then you can have safe bridging. And if you're only using canonical representations of tokens, you can have safe bridging. The problem is that those canonical bridges are slow and expensive, and I'd argue that's a good thing. The reason why they're slow and expensive is because they're super secure - and we want that. That's like our primary thing. 

So the canonical bridges are slow and expensive, but that's where Across fits in, where we layer on this intent-based bridging architecture, where we have these relayers that do these fast fills with their own money and then they effectively rebalance themselves using only the canonical bridges. And so we again figured out this sort of shortcut where we have this fast network of relayers providing liquidity on top of the slow and secure canonical bridging network.

And I think, you know, I mean, I'm biased, this is our product, but I think that this is the right set of tradeoffs in this design. What do you think, Nick?

NICK

Yeah, I'd love to add to that. I think there are even more categories where you can gain security on bridges. And I definitely think the type of asset or the type of canonical bridges you're built on top of is probably the most important factor.

So back to the categories I mentioned earlier - asset type, where liquidity comes from, settlement and validation. Asset type is probably the most important, if you deal with just canonical assets and canonical bridges, you're as safe as you can be with bridges. 

Now, in terms of liquidity, there are roughly two types of bridges in terms of where the liquidity comes from. Either the liquidity is kept on all the destinations - you kind of have these passive pools or AMMs that give out the funds at bridge time - or you can have just-in-time liquidity, where market makers bring liquidity onchain in order to fulfill a deposit. We favor the second one. If liquidity is coming from the relayer, and the relayer has the choice of how they want to bring the liquidity onchain in order to fulfill a bridge, that just seems a lot safer than keeping funds onchain where you have potential honeypots on each of the different chains. It's much better to have as few of these passive pools of liquidity as possible and leave it up to the relayer to determine how to bring liquidity onchain. 

In terms of settlement, there are two ways that I think bridges determine whether a bridge transfer should be settled: individually, so every single order is individually settled, or batched - so over the past hour, there might have been 100 different bridge transfers from all the different chains. Individual settlement would settle each of those orders individually, and batch settlement would do a whole batch of them once per hour. The security tradeoff here is that if you're batch settling, there's usually only one transaction that proposes what the batch of settlements is. So this reduces the attack surface. If there's a bug, you don't have a bug in 100 different settlements. You just have a bug in that one batch settlement. 

And then in the last category, in terms of validation, how do you actually determine whether that settlement is valid or not? There are several mechanisms for this. There's optimistic, so someone proposes a batch settlement or a series of settlements, and then during some challenge period, someone can dispute that set of settlements. Another alternative would be validity proofs. This would require some zero knowledge technology and some contract on each chain that can know about consensus state on the different chains. I would say this technology is a bit early right now and a bit more theoretical, although it will eventually come to market. Or proof of authority - you kind of just have a trusted actor saying this is the state of things. Maybe it's a multisig, maybe it's a single EOA, and they kind of determine whether the settlement was correct or not. 

So obviously the proof of authority is the least secure, and that has led to some hacks where some of the bridge hacks were not actually contract bugs, they were just someone on the multisig who validates settlements, some of their keys were compromised.

I believe the safest currently would be optimistic validation because optimistic validation means that most of the validation logic is moved offchain. And then different chain actors can run the same or slightly different code and try to challenge each other. But this means that the validation logic can be upgraded very quickly offchain.  Eventually, I do think the safest will be validity proofs, but I just think that technology is a bit early right now.
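The optimistic flow Nick describes - propose a batch, allow disputes during a challenge window, finalize if undisputed - can be sketched as a toy state machine. Names and the window length are hypothetical, chosen for illustration:

```python
# Toy optimistic settlement: a proposer posts a batch, anyone can dispute
# it during a challenge window, and undisputed batches finalize after it.
CHALLENGE_WINDOW = 2.0  # seconds here for the sketch; hours in practice

class OptimisticSettlement:
    def __init__(self):
        self.proposals = []  # each: {"batch": ..., "at": time, "disputed": bool}

    def propose(self, batch, now):
        self.proposals.append({"batch": batch, "at": now, "disputed": False})

    def dispute(self, index, now):
        p = self.proposals[index]
        # disputes are only valid inside the challenge window
        if now - p["at"] <= CHALLENGE_WINDOW:
            p["disputed"] = True

    def finalized(self, index, now):
        p = self.proposals[index]
        return (not p["disputed"]) and (now - p["at"] > CHALLENGE_WINDOW)

s = OptimisticSettlement()
s.propose(["transfer #1", "transfer #2"], now=0.0)
s.dispute(0, now=5.0)           # too late: outside the window, so ignored
assert s.finalized(0, now=5.0)  # undisputed past the window => finalized
```

The point Nick makes about upgradability follows from this shape: the heavy validation logic lives in the offchain actors deciding whether to call `dispute`, so it can evolve without touching the onchain contract.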

But in summary, there are many different ways to gain security with bridges. And I think that the narrative right now of bridges being dangerous is just that there have been many primitive bridges built with tradeoffs that I think are trading off security in favor of speed and cost. 


KATHERINE
When I think about bridges, I kind of think of bridges as L2s. So I want to get at the question of shared sequencers and the tradeoffs, and people talk about shared sequencers in the context of L2. But in my mind, bridges are also L2s. And so how do you guys think through the tradeoffs with decentralized versus centralized sequencers?

HART

Okay, if we zoom way out, we have these different blockchains, and if you're within your own blockchain, it's a happy little universe where you're just doing your state transitions within your blockchain and everything is happy and good. Everything gets complicated when you try to tie these blockchains together. And so I think, Katherine, I very much agree with your statement that bridges are L2s. This is now made blurry by the concept of maybe like a sovereign rollup, but let's not go there for a second — most L2s try to anchor themselves to a chain as their base source of security. So Optimism and Arbitrum, for example, checkpoint their state into Ethereum mainnet periodically, and that basically means these two chains are tied in their own way. And you can think of that checkpoint as sort of a very meta bridge transaction where we are creating a bridge, creating a link, between the rollup and Ethereum mainnet in this case.

And so in that sense, I do think that bridges are very related. Like you can tie these two concepts together. Again, this is kind of where I think the Across intent-based bridging architecture is pretty interesting, where the bridge between Optimism, for example, and Ethereum mainnet is slow, it just takes a while and it takes seven days the way it's designed, seven days to prove that this checkpoint is right. So what we're really doing here is, in a sense, Across is not a bridge, maybe? Maybe that's the way to think about it, where there are these canonical bridges that are tying things. And then Across is kind of like this system for relayers that kind of look like market makers to quickly do things between these two worlds, such that a user has a good experience, and then the third party after the relayer - and I'm abstracting this is like very generalized - but the third party relayer is the one that's been using the canonical official bridge to rebalance or to kind of validate that things happen. 

So I'm now ranting, but imagine you have two pools, two chains A and B, and they're their own happy universe and they're tied together with this slow but secure mechanism. And instead you want to offer another layer, this fast bridge layer, that is effectively aggregating user orders, limit orders or intents, whatever you want to call them. And this third party is providing a convenience function of letting the user quickly go between these two chains, while we ultimately go back and secure them on the default bridge. 

NICK
Yeah, it's almost like the sequencer's main role is to periodically bring batches of transactions onchain - just publishing, "this is a set of transactions," and allowing people to challenge them. But sequencers are usually concerned with generalized execution. So within those transactions, some of them could be message transfers, some could be more specifically ERC-20 transfers. It's almost like bridges specifically service the ERC-20 transfers that happen somewhere, publish them periodically onchain, and allow relayers to kind of cherry-pick ERC-20 transfers that happen on some L2. And they give the relayer a place to say: these transactions happened on the L2, I'm going to put money behind it, I'm going to front that user some money on this other chain, and I'm going to request repayment from the bridge. So I guess in a general sense, sequencers and bridges are both concerned with making statements about what happened off the chain - or whatever origin chain we're using as our perspective. And then bridges are specifically dealing with ERC-20 transfers. 

KATHERINE

You know, we care about speed, we care about capital efficiency, and maybe we care about decentralization - that's why we all talk about shared sequencers - but does that ironically affect speed? Is the lowest common denominator basically the slowest shared sequencer? 

HART
Well, I think there's a lot of misconceptions here. Shared sequencers can get confused with decentralized sequencers, too. So instead of running your sequencer in a centralized way, you want to trust some protocol to run your sequencer. Okay, so we decentralize it. That's great. 

The shared sequencer has this promise of sort of composability between rollups, but that's not totally true. You can't atomically execute things between rollups. You can atomically include them - you can say, hey, here's a set of transactions I want to do on chain A and here's a set of transactions I want to do on chain B, and a shared sequencer could atomically send those sets of transactions to both chains at effectively the same time - but it cannot guarantee that those sets of transactions both execute and don't revert. 

So this is me being a little nuanced on some of the over-promises that shared sequencers have today. Could they get solved in the future? Maybe, but I'm actually kind of doubtful. My bigger point here is that shared sequencers don't solve our interop problems, because we can't do atomic execution. They might solve for decreasing the latency of reading between different L2s. I think that there are some very real designs on how to do that. And what that would mean in our case is there would still be a need for a fast bridge if you want to do things very quickly between layer 2s, and I think that that is a very promising direction. So I'm actually really bullish on a lot of the shared sequencer stuff in terms of it supporting Across, where Across can right now do sub-one-second L2-to-L2 transactions, but the relayer will be able to get paid back more quickly and will have increased capital efficiency if the relayer can get paid back, say, within a few minutes. That just makes the cost of the relayer convenience that much lower. Does that make sense? 
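Hart's distinction between atomic inclusion and atomic execution can be made concrete with a toy example. Everything here is hypothetical - two simulated chain states and a transfer that reverts on one of them:

```python
# Toy illustration of atomic inclusion vs. atomic execution.
# A shared sequencer can include the same transaction set on two chains
# at once, but cannot guarantee that both executions succeed.
def execute_transfer(chain_state, sender, amount):
    """Apply a transfer; revert (return False) if the sender lacks funds."""
    if chain_state.get(sender, 0) < amount:
        return False  # reverted
    chain_state[sender] -= amount
    return True

chain_a = {"alice": 10}  # alice is funded on chain A
chain_b = {"alice": 0}   # ...but not on chain B

# The shared sequencer atomically *includes* the same intent on both chains:
included_on_a = included_on_b = True

# Execution outcomes still differ: the chain B leg reverts.
ok_a = execute_transfer(chain_a, "alice", 5)
ok_b = execute_transfer(chain_b, "alice", 5)

assert included_on_a and included_on_b  # atomic inclusion held
assert ok_a and not ok_b                # atomic execution did not
```

This is exactly the gap Hart points at: inclusion is a scheduling guarantee, while execution depends on each chain's state at run time.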

NICK
Yeah. And shared sequencers also consolidate the trust amongst different chains. So one of the problems with fast bridges today is, if you're trying to bridge from, let's say, Polygon to Optimism, they have very different rules about finality. A transaction might take hours to finalize on Polygon and might take days to finalize on Optimism, and as a relayer who's bridging transactions between the two, you would need to be aware of how this works on both of them, because you can't risk a transaction from either of these chains reverting. And similarly, on the destination chain, you also have to be aware of finality in terms of when you can get your funds back.

So if you were instead bridging transactions between two chains that shared a sequencer, it's just a bit easier as a relayer. Maybe this doesn't reduce complexity in your code, but it reduces complexity and operational costs in terms of thinking about and dealing with different chains. So it probably will be easier to relay between chains with shared sequencers. And we see that today actually with Optimism and Base. I don't know if they actually share a sequencer specifically, but they have similar finality mechanisms. 

KATHERINE
What about - this is actually one of your questions Nick - what do you think the role of aggregators is within the bridging space? 

NICK
Yeah, I'll start that. From Across’s perspective, we've seen aggregators act as the tip of the spear in terms of being the first application that a lot of users use to bridge transactions between chains. And aggregators have a lot of leverage in the bridging space because they can focus a lot of efforts on UX and they will usually route orders to the bridges that are the cheapest and the fastest to deliver and the whole bridging UI, if it exists, is abstracted from the user in many cases. So I think aggregators are very important and they pressure bridges to compete on, so far, speed, cost and security. 

HART
I'll add to that, Nick. I'm not sure if this is contrarian to what you're saying, because obviously aggregators are something Across has really embraced, and we really enjoy working with them because they're a pretty cool showcase for our product - right now in aggregators, Across usually shows up as the cheapest and fastest by a large margin. So they're a very cool way to showcase how our approach to interop works. 

But if I project out into the future, I see there being this structure in this intents-based bridging architecture. I'm super bullish that this type of intents-based bridging architecture is the future of where stuff is going, and I look at it as there being three layers here. There's user order generation, so like an RFQ or aggregator sits at the top layer generating these orders. There is this middle layer of relayers or solvers or searchers or whatever you want to call them that are fast filling these users with their own capital. And then the bottom layer is the settlement layer, where those user intents are getting settled - user funds are basically getting escrowed until we verify that the middle layer did its job.

And so projecting this out, I look at aggregators as a source of order flow for this intent-based bridging architecture much in the same way that the Across front-end, Across.to for example, is a source of order flow in the same way that like UniswapX would be a source of order flow or 1inch Fusion or CoW Swap.

And so we have this world where orders are being generated by users - and these limit orders are what people call intents - and they could be cross-chain. And then we have this hyper competitive solver layer ecosystem to fill them. And then you have settlement layers at the bottom. And I actually think in this architecture of the Across fast bridge, you can actually re-imagine it as this settlement layer that is supporting order flow no matter where it comes from, including coming from aggregators. We're supporting order flow to help settle these fast bridge transactions.

NICK
Yeah, so a bridge fundamentally just offers a validation mechanism for settlements - settlements of transfers between different chains. And then the bridge can be agnostic about how those transfers actually get signaled. They could be signaled from aggregators, they could be signaled from cross-chain DEXs like UniswapX. They could come directly from the Across front-end right now, or any bridge front-ends. But yeah, I do think the way the interoperability industry is moving is that bridges are going to move more towards the settlement layer. 

KATHERINE
Okay. Last question. Last question. And this is about bridges, but more generally, L2s. If you had to pick one metric for success for L2s, what would it be? And the reason why I'm asking this is because we're recording this on November 22nd, 2023, and yesterday I saw an announcement of a really hot L2 that everyone is talking about called Blast. And it's super controversial. And if I were to just take that perspective, it would seem like a metric for success would be ETH deposited. And I think a lot of people had issues with how they were achieving that. And so that got me thinking - if you were thinking through designing an L2, what is your metric for success? 

HART
I mean, I think for me it's transaction fees. Like what are people paying to use your L2? It's kind of interesting because the whole point of an L2 is you want to make really low transaction fees. But I mean, in aggregate, what is the total amount of revenue that your network is generating? Because that's literally what people are paying for. 

So TVL and all that matters, but I think you want people paying to use the service, and the more people are paying to use the service in aggregate, the more service you're providing. 

NICK
Yeah, I think that's really good. I think volume and TVL, TVL especially, is a bit overrated. At the end of the day, it does come down to fees, although I guess you could think about volume and TVL as potential future fees, but I think it is hard to increase fees over time. 

KATHERINE
Yeah, super fair. Super fair. Where can people go and follow the work that you guys do at Across?

HART
So you can go check out our website Across.to, and on Twitter it's @acrossprotocol. For me personally, I'm @hal2001 and I'm trying to write increasingly spicy takes on the future of interop. So happy to entertain you there. What about you, Nick? 

NICK
I am on Twitter at @mountainwaterpi. And also Hart’s Twitter is really good and you should read it and it is really spicy, but I think it should open up your mind about thinking about L2s. 


HART
Thanks Nick. 

KATHERINE
Yeah. Plus one on the spicier takes coming from Hart lately. Good to see it. All right, guys, thanks for taking the time today. 

HART
Thanks, Katherine. Thank you for having us.

NICK

Thanks, Katherine. Thanks, Hart.

December 13, 2023 | 30:13

On this episode of Archebyte, Katherine Wu and fellow Archetype team member, Nick Pai, are joined by the CEO and co-founder of Risk Labs, Hart Lambur, to talk bridges, L2s, and interoperability.

Hart and Risk Labs are best known for creating UMA, an optimistic oracle, and Across Protocol, a crosschain bridge. In the multichain future that is quickly becoming the present, infrastructure like oracles, bridges, and sequencers play crucial roles. But, these tools that are meant to facilitate interoperability between blockchains still have plenty of room for improvement.

During our conversation, we dig into the problems that bridges currently face, how Across is designed to mitigate them, and what a safer multichain ecosystem looks like. Hart and Nick give their thoughts on centralized, decentralized, and shared sequencers, intents, and the role aggregators play in regards to bridges. They also answer the loaded question - are bridges L2s?

📬 To keep up with the latest from Archebyte and receive articles and research from the rest of the Archetype team, subscribe to our newsletter: http://eepurl.com/iCApL2

- - - - - - - -

TIMESTAMPS

0:00 Intro

2:10  The bridging problem

5:30 How Across works

8:15 Making safe bridges

15:31 L2s and centralized vs decentralized sequencers

20:05 Sequencer misconceptions?

23:35 Aggregators in bridging

27:03 L2 metrics for success

👋 FOLLOW US

Hart: https://twitter.com/hal2001 

Nick: https://twitter.com/mountainwaterpi 

UMA: https://uma.xyz/
Across: https://across.to/ 

Archetype: https://twitter.com/archetypevc

Katherine: https://twitter.com/katherineykwu 

🌐 LINKS

UMA: https://uma.xyz/
Across: https://across.to/ 

Risk Labs: https://risklabs.foundation/

📜 TRANSCRIPT

KATHERINE

Hello everyone, and welcome back to Archebyte. Every other week on Archebyte we have on some of the smartest builders and founders in the crypto industry to tell us what's top of mind for them. Today we are joined by Hart Lambur, who is the CEO and co-founder of Risk Labs. Risk Labs is most known for launching the UMA Protocol, as well as the Across Protocol.

We also have a special guest today along with Hart - Nik Pai - who joined Risk Labs in 2020 and was an early engineer for UMA Protocol and now a tech lead at Across. He is also a research partner with me at Archetype and we are very lucky to have his perspective within the firm. Now, without further ado, today's episode is about all things bridges, L2, and interop, from the taxonomy to the design choices to Nick and Hart's vision of the interop world. So welcome to the show, Nick and Hart. 

HART

Thanks, Katherine. It's going to be fun. 

NICK

Thanks, Katherine.

KATHERINE

It's going to be very fun. It's going to be very nerdy, but very fun. Now, to kick us off, actually, I'm going to turn this first question over to Nick. Give us a very high level brief overview of the bridging architecture taxonomy to start.

NICK

Yeah, so I will set the stage and I'll describe the bridging problem briefly. And then I want to propose a framework for thinking about the solutions to this problem and then hand it over to Hart.

So today, in 2023, blockchains are definitively multichain. The multichain future that we've all been talking about has arrived at this point. Many of these chains are isolated from each other, and those that are connected are either connected insecurely, or securely but slowly.

So there's this new set of applications that define the interoperability industry that try to service users who want to transfer value and information quickly, securely and permissionlessly. This is a really hard problem, as we've seen. 

The first bit, about transferring information quickly, is really hard because you can't really be confident about the state on the origin chain before you send it to the destination - not until the origin chain's information has finalized. And usually finality can take several hours to several days.

And transferring messages securely is also really hard because you need to make sure that the message that the user wants to transfer from one chain to another is not subject to censorship by a central party. In other words, the sender should always be able to fast forward their own message to the destination.

And I think this presents a really hard UX problem. Sending a transaction from one chain to another feels magical, and it's very scary to see your transaction occur on one chain and then have to wait for it to appear on the other chain. Ideally there should be a trace of transactions from the origin to the destination, such that the user can always feel secure that they can recover their message or revert it if something happens. And if this trace of transactions from the origin chain to the destination ever disappears, it feels scary, and there's probably a trusted actor somewhere. 

So what Hart and I work on is something called a fast bridge, and fast bridges are essentially risk transfers for users. Fast bridges introduce a third party, called a relayer or filler, who takes on the finality risk on the origin chain and forwards messages really quickly to the user on the destination chain.

So the relayer is taking on all of this risk and the user is going to pay them for the privilege of transferring their message across. And so the whole challenge of bridges is determining whether this relayer acted correctly and determining whether to pay them a reward. And I think there are roughly four categories, and I'll just describe them at a very high level and we can get into details.

But I think the four categories are: asset type - so what kind of message is being transferred? Is it a token or just an arbitrary message? Settlement - how does the bridge actually refund or reward relayers? Is this done individually or in batch settlements? Validation - how does the bridge validate that the transfer happened correctly - optimistically, with validity proofs, or by proof of authority? And finally, for token bridges - which are the subset of bridges that Hart and I really care about - where does liquidity come from on the destination chain? Is it held passively onchain, or is it brought onchain just in time by market makers? 

So I think that's how I view the four categories. I’m kind of curious if Hart has a different framework for that.

HART

Yeah, I mean, Nick, you and I work together, so we probably agree a lot, and I do. I think what I'd emphasize and add to your analysis here is just that most bridges follow this paradigm. They aren't this intents-based bridging architecture that we work on at Across. 

So maybe to describe the other bridges: it's basically deposit an asset on the origin chain, then some magic middle thing happens where a message is sent from origin to destination, and then funds are released on the destination chain.

And like you said, there are problems here, where that something-magic-in-the-middle - if you're sending a message - is costly, like even just in gas costs, or it's insecure, or it takes time. So when we were inventing Across, we were trying to figure out a way to do it differently - that's what I'd add to your analysis here.

And the way we do it differently is, like you said, by adding this third party - the relayer - that actually just fills the user with their own funds, with their own money. And fills them very quickly with their own money. So quickly that it actually emulates atomicity. It's almost like this is an atomic fill for the user. And again, the reason why this is so quick and fast is because this relayer takes on that risk themselves.

And then what happens is a user gets the fast fill and then the relayer sits there and has to wait to get paid back. And the point that is interesting here, so we have a tradeoff in this design versus the other set of designs - this relayer has to make a loan. They're effectively loaning funds for, let's say, an hour.

But here's the brilliant thing: at a 10% annualized interest rate, the cost of loaning funds for an hour works out to about a tenth of a basis point. It's a really teeny, tiny number. And what we realized is that you can have this fast, great user experience because of this very short-term loan. It doesn't really cost much. And it turns out that tenth of a basis point can be saved by doing intelligent things, which we do with the Across protocol, to save gas in other ways. 
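[Editor's note: for illustration only - this is just the back-of-the-envelope arithmetic Hart describes, not Across's actual fee model. The carrying cost of that one-hour loan can be checked directly:]

```python
# Back-of-the-envelope check of the relayer's cost of capital:
# lending funds for one hour at a 10% annualized interest rate,
# using a simple (non-compounding) approximation.

ANNUAL_RATE = 0.10           # 10% annualized interest rate
HOURS_PER_YEAR = 365 * 24    # 8760

cost_per_hour = ANNUAL_RATE / HOURS_PER_YEAR    # fraction of notional
cost_in_bps = cost_per_hour * 10_000            # 1 basis point = 0.01%

print(f"{cost_in_bps:.3f} bps")   # 0.114 bps - about a tenth of a basis point

# On a hypothetical $10,000 bridge transfer, that's roughly 11 cents:
print(f"${10_000 * cost_per_hour:.2f}")   # $0.11
```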

And so we kind of have this cool tradeoff where this relayer, by lending money, which seems like it could be costly, it turns out it's not that costly, and we're actually able to more than make up those costs by doing fun gas optimizations everywhere. So that's like the only other point that I've been harping on recently on how that works and I find it cool. 

KATHERINE

Well, I mean, we all love capital efficiency, so that is cool. Let me ask maybe more by digging into the design, specifically the design choices. Aside from exchanges, I think bridges probably make up some of the biggest hacks in our industry - if you just read through the top 10 to 20 hacks, a big portion of them are from bridges getting hacked. And I do think it's important to delineate between the types of bridges that have been hacked historically and what you guys are working on. As you think through design architecture for bridges, is there such a thing as a truly safe bridge design?

HART

So, Nick, I'll take a first pass and then you can add or critique. I'd make two points here. One point is that many of the bridge hacks are what we call lock-and-mint bridge designs, where you go and deposit a native asset on the origin chain. So let's just say it's ETH - I deposit ETH on Ethereum. And then I want to bridge a representation of ETH to a destination chain - let's just take Solana. I want to bridge to Solana, and Solana has no native ETH, so I have a wrapped version of it - I have my bridge's version of ETH on Solana. And again, in this design, we still have the deposit, and then the magic in the middle happens to send a message to Solana, and then the wrapped ETH is released on Solana.

And this lock and mint design is super scary because it means that I'm sitting there with my money, my wrapped ETH, on Solana. But if that bridge architecture ever fails, there's this huge honeypot of money locked on the origin chain that can get drained. And some of the biggest hacks like Wormhole and Nomad - this is basically what happened.

So one source of bridge risk is this lock and mint design, which I don't want to speak for Nick, but I know he agrees, we don't like this. We don't like fake versions of tokens. We only want to deal with canonical versions. 

That brings me to my second point. On many chains there is a "canonical" way of sending messages or bridging tokens. Arbitrum and Optimism, as examples, have canonical bridges where trusting the canonical bridge is basically like trusting the chain itself - it's not exactly the same, but it's close. There's a little technical nuance in here which is worth exploring, but not right here. But basically, if you're using the canonical bridges, they're essentially as trustworthy as Arbitrum or Optimism themselves.

And so the answer to your question is, I think if you are only using canonical bridges, then you can have safe bridging. And if you're only using canonical representations of tokens, you can have safe bridging. The problem is that those canonical bridges are slow and expensive, and I'd argue that's a good thing. The reason why they're slow and expensive is because they're super secure - and we want that. That's like our primary thing. 

So the canonical bridges are slow and expensive, but that's where Across fits in, where we layer on this intent-based bridging architecture: we have these relayers that do these fast fills with their own money, and then they effectively rebalance themselves using only the canonical bridges. And so we again figured out this sort of shortcut where we have this fast network of relayers providing liquidity on top of the slow and secure canonical bridging network.

And I think, you know, I mean, I'm biased, this is our product, but I think that this is the right set of tradeoffs in this design. What do you think, Nick?

NICK

Yeah, I'd love to add to that. I think there are even more categories where you can gain security on bridges. And I definitely think the type of asset or the type of canonical bridges you're built on top of is probably the most important factor.

So back to the categories I mentioned earlier - asset type, where liquidity comes from, settlement and validation. Asset type is probably the most important, if you deal with just canonical assets and canonical bridges, you're as safe as you can be with bridges. 

Now, in terms of liquidity, there are roughly two types of bridges in terms of where the liquidity comes from. Either the liquidity is kept on all the destinations - you have these passive pools or AMMs that give out the funds at bridge time - or you can have just-in-time liquidity, where market makers bring liquidity onchain in order to fulfill a deposit. We favor the second one. If liquidity is coming from the relayer, and the relayer has the choice of how they want to bring the liquidity onchain in order to fulfill a bridge, that just seems a lot safer than keeping funds onchain, where you have potential honeypots on each of the different chains. It's much better to have as few of these passive pools of liquidity as possible and leave it up to the relayer to determine how to bring liquidity onchain. 

In terms of settlement, there are two ways I think bridges determine whether a bridge transfer should be settled: individually, so every single order is individually settled, or batched - so over the past hour, there might have been 100 different bridge transfers across all the different chains. Individual settlement would settle each of those orders individually, and batch settlement would do the whole batch of them once per hour. The security tradeoff here is that if you're batch settling, there's usually only one transaction that proposes what the batch of settlements is. So this reduces the attack surface. If there's a bug, you don't have a bug in 100 different settlements - you just have a bug in that one batch settlement. 
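[Editor's note: as a toy sketch of why batch settlement shrinks the attack surface - hypothetical code, not Across's actual contracts - a proposer can commit to an entire batch of relayer refunds with a single Merkle root, so there's one proposal to check, and to dispute, instead of a hundred:]

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Commit to a whole batch of settlements with one 32-byte value."""
    level = [sha(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# 100 hypothetical relayer refunds: (relayer, token, amount) flattened to bytes.
refunds = [f"relayer{i}|USDC|{100 + i}".encode() for i in range(100)]

# Individual settlement: 100 onchain proposals, 100 places a bug can hide.
# Batch settlement: one proposal - this single root - which challengers
# can verify offchain and dispute during the challenge window.
root = merkle_root(refunds)
assert merkle_root(refunds) == root             # deterministic commitment
assert merkle_root(refunds[:-1]) != root        # any change alters the root
```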

And then the last category, validation: how do you actually determine whether that settlement is valid or not? There are several mechanisms for this. There's optimistic validation - someone proposes a batch settlement or a series of settlements, and then during some challenge period, someone can dispute that set of settlements. Another alternative would be validity proofs. This would require some zero-knowledge technology and a contract on each chain that can know about consensus state on the other chains. I would say this technology is a bit early right now and a bit more theoretical, although it will eventually come to market. Or proof of authority - you just have a trusted actor saying this is the state of things. Maybe it's a multisig, maybe it's a single EOA, and they determine whether the settlement was correct or not. 

So obviously proof of authority is the least secure, and that has led to some hacks - some of the bridge hacks were not actually contract bugs; keys belonging to the multisig signers who validate settlements were compromised.

I believe the safest currently would be optimistic validation, because optimistic validation means that most of the validation logic is moved offchain. Different offchain actors can run the same or slightly different code and try to challenge each other. And this means that the validation logic can be upgraded very quickly offchain. Eventually, I do think the safest will be validity proofs, but I just think that technology is a bit early right now.
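[Editor's note: to make the optimistic lifecycle Nick describes concrete, here is a toy propose/dispute/finalize state machine - a hypothetical sketch of the general pattern, not UMA's or Across's actual contracts:]

```python
class OptimisticSettlement:
    """Toy propose -> challenge window -> finalize lifecycle.

    Anyone can propose a settlement root; during the liveness window,
    watchers running the validation logic offchain can dispute it;
    only an undisputed proposal finalizes and pays out.
    """

    def __init__(self, liveness_seconds: float):
        self.liveness = liveness_seconds
        self.proposal = None

    def propose(self, batch_root: str, now: float) -> None:
        assert self.proposal is None, "a proposal is already pending"
        self.proposal = {"root": batch_root, "at": now}

    def dispute(self, now: float) -> None:
        assert self.proposal is not None, "nothing to dispute"
        assert now < self.proposal["at"] + self.liveness, "window closed"
        # Thrown out; in a real system this escalates to offchain/oracle
        # resolution, with the proposer's bond at stake.
        self.proposal = None

    def finalize(self, now: float) -> str:
        assert self.proposal is not None, "nothing to finalize"
        assert now >= self.proposal["at"] + self.liveness, "still in challenge window"
        root, self.proposal = self.proposal["root"], None
        return root   # settlements under this root can now be executed

# An honest, undisputed proposal survives the window and finalizes:
s = OptimisticSettlement(liveness_seconds=3600)
s.propose("batch-root-1", now=0)
assert s.finalize(now=3601) == "batch-root-1"
```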

But in summary, there are many different ways to gain security with bridges. And I think that the narrative right now of bridges being dangerous is just that there have been many primitive bridges built with tradeoffs that I think are trading off security in favor of speed and cost. 


KATHERINE
When I think about bridges, I kind of think of bridges as L2s. So I want to get at the question of shared sequencers and the tradeoffs, and people talk about shared sequencers in the context of L2. But in my mind, bridges are also L2s. And so how do you guys think through the tradeoffs with decentralized versus centralized sequencers?

HART

Okay, if we zoom way out, we have these different blockchains, and if you're within your own blockchain, it's a happy little universe where you're just doing your state transitions within your blockchain and everything is happy and good. Everything gets complicated when you try to tie these blockchains together. And so, Katherine, I very much agree with your statement that bridges are L2s. This is now made blurry by the concept of something like a sovereign rollup, but let's not go there for a second - most L2s try to anchor themselves to a chain as their base source of security. Optimism and Arbitrum, for example, checkpoint their state into Ethereum mainnet periodically, and that basically means these two chains are tied in their own way. And you can think of that checkpoint as sort of a very meta bridge transaction, where we are creating a bridge, creating a link, between the rollup and Ethereum mainnet in this case.

And so in that sense, I do think that bridges are very related - you can tie these two concepts together. Again, this is where I think the Across intent-based bridging architecture is pretty interesting. The bridge between Optimism, for example, and Ethereum mainnet is slow - the way it's designed, it takes seven days to prove that a checkpoint is right. So what we're really doing here is, in a sense - maybe Across is not a bridge? Maybe that's the way to think about it. There are these canonical bridges that are tying things together, and then Across is this system for relayers - which look a lot like market makers - to quickly do things between these two worlds, such that the user has a good experience. And then the third-party relayer - and I'm abstracting here, this is very generalized - is the one that uses the canonical, official bridge to rebalance, or to validate that things happened. 

So I'm now ranting, but imagine you have two chains, A and B, and they're each their own happy universe, and they're tied together with this slow but secure mechanism. And instead you want to offer another layer, this fast bridge layer, that is effectively aggregating user orders - limit orders, or intents, whatever you want to call them. And this third party is providing a convenience function of letting the user quickly go between these two chains, while we ultimately go back and secure them on the default bridge. 

NICK
Yeah, it's almost like the sequencer's main role is to periodically bring batches of transactions onchain - just publishing "this is a set of transactions" and allowing people to challenge them. But sequencers are usually concerned with generalized execution. Within those transactions, some could be message transfers, and some could be, more specifically, ERC-20 transfers. It's almost like bridges specifically service the ERC-20 transfers that happen somewhere: they publish them periodically onchain and they allow relayers to cherry-pick ERC-20 transfers that happened on some L2. And they give the relayer a place to say, these transactions happened on the L2, I'm going to put money behind it, I'm going to front that user some money on this other chain, and I'm going to request repayment from the bridge. So I guess in a general sense, sequencers and bridges are both concerned with making statements about what happened on the origin chain - whichever chain we're using as our perspective - and bridges are specifically dealing with ERC-20 transfers. 

KATHERINE

You know, we care about speed, we care about capital efficiency, and maybe we care about decentralization - that's why we talk about shared sequencers. But does that ironically affect speed? Is the common denominator for speed basically the slowest shared sequencer? 

HART
Well, I think there are a lot of misconceptions here. Shared sequencers can get confused with decentralized sequencers, too. So instead of running your sequencer in a centralized way, you want to trust some protocol to run your sequencer. Okay, so we decentralize it. That's great. 

The shared sequencer has this promise of sort of composability between rollups, but that's not totally true. You can't atomically execute things between rollups - you can atomically include them. You can say, hey, here's a set of transactions I want to do on chain A and here's a set of transactions I want to do on chain B, and a shared sequencer could atomically send those sets of transactions to both chains at effectively the same time, but it cannot guarantee that those sets of transactions both execute and don't revert. 

So this is me being a little nuanced about some of the over-promises that shared sequencers carry today. Could they get solved in the future? Maybe - I'm actually kind of doubtful. But my bigger point here is that shared sequencers don't solve our interop problems, because we can't do atomic execution. They might solve for decreasing the latency of reading between different L2s - I think there are some very real designs for how to do that. And what that would mean in our case is there would still be a need for a fast bridge if you want to do things very quickly between layer 2s, and I think that is a very promising direction. So I'm actually really bullish on a lot of the shared sequencer stuff in terms of it supporting Across: Across can do sub-one-second L2-to-L2 transactions right now, but the relayer will have increased capital efficiency if they can get paid back more quickly - say, within a few minutes. That just makes the cost of the relayer convenience that much lower. Does that make sense? 

NICK
Yeah. And shared sequencers also consolidate the trust amongst different chains. One of the problems with fast bridges today is that if you're trying to bridge from, say, Polygon to Optimism, they have very different rules about finality. A transaction might take hours to finalize on Polygon and days on Optimism, and as a relayer who's bridging transactions between the two, you need to be aware of how this works on both of them, because you can't risk a transaction on either of these chains reverting. And similarly, on the destination chain, you also have to be aware of finality in terms of when you can get your funds back.

So if you were instead bridging transactions between two chains that share a sequencer, it's just a bit easier as a relayer. Maybe this doesn't reduce complexity in your code, but it reduces complexity and operational costs in terms of thinking about and dealing with different chains. So it probably will be easier to relay between chains with shared sequencers. And we see that today, actually, with Optimism and Base - I don't know if they specifically share a sequencer, but they have similar finality mechanisms. 

KATHERINE
What about - this is actually one of your questions Nick - what do you think the role of aggregators is within the bridging space? 

NICK
Yeah, I'll start with that. From Across's perspective, we've seen aggregators act as the tip of the spear, in terms of being the first application that a lot of users use to bridge transactions between chains. And aggregators have a lot of leverage in the bridging space because they can focus a lot of effort on UX, and they will usually route orders to the bridges that are the cheapest and fastest to deliver - and the whole bridging UI, if it exists, is abstracted from the user in many cases. So I think aggregators are very important, and they pressure bridges to compete on - so far - speed, cost, and security. 

HART
I'll add to that, Nick. I'm not sure if this is contrarian to what you're saying, because obviously aggregators are something Across has really embraced, and we really enjoy working with them because they're a pretty cool showcase for our product - right now in aggregators, Across usually shows up as the cheapest and fastest by a large margin. So they're a very cool way to showcase how our approach to interop works. 

But if I project out into the future, I see there being this structure in this intents-based bridging architecture. I'm super bullish that this type of intents-based bridging architecture is the future of where stuff is going, and I look at it as a world of three layers. There's user order generation - so an RFQ system or aggregator sits at the top layer generating these orders. There's this middle layer of relayers, or solvers, or searchers, or whatever you want to call them, that are fast-filling these users with their own capital. And then the bottom layer is the settlement layer, where those user intents are getting settled - user funds are basically getting escrowed until we verify that the middle layer did its job.

And so projecting this out, I look at aggregators as a source of order flow for this intent-based bridging architecture much in the same way that the Across front-end, Across.to for example, is a source of order flow in the same way that like UniswapX would be a source of order flow or 1inch Fusion or CoW Swap.

And so we have this world where orders are being generated by users - these limit orders are what people call intents - and they could be cross-chain. Then we have this hyper-competitive solver-layer ecosystem to fill them. And then you have settlement layers at the bottom. And I actually think in this architecture you can re-imagine the Across fast bridge as that settlement layer, supporting order flow no matter where it comes from, including from aggregators. We're supporting order flow to help settle these fast bridge transactions.

NICK
Yeah, so a bridge fundamentally offers, I guess, a validation mechanism for settlements - settlements of transfers between different chains. And the bridge can be agnostic about how those transfers actually get signaled. They could be signaled from aggregators, they could be signaled from cross-chain DEXs like UniswapX, or they could come directly from the Across front-end right now, or any bridge front-end. But yeah, I do think the way the interoperability industry is moving is that bridges are going to move more towards being the settlement layer. 

KATHERINE
Okay. Last question. Last question. And this is about bridges, but more generally, L2s. If you had to pick one metric of success for L2s, what would it be? And the reason I'm asking is that we're recording this on November 22nd, 2023, and yesterday I saw an announcement of a really hot L2 that everyone is talking about called Blast. And it's super controversial. If I were to just take that perspective, it would seem like a metric for success would be ETH deposited - and I think a lot of people had issues with how they were achieving that. So that got me thinking: if you were designing an L2, what is your metric for success? 

HART
I mean, I think for me it's transaction fees - what are people paying to use your L2? It's kind of interesting, because the whole point of an L2 is that you want really low transaction fees. But in aggregate, what is the total amount of revenue that your network is generating? Because that's literally what people are paying for. 

So TVL and all that matters, but I think you want people paying to use the service, and the more people are paying to use the service in aggregate, the more service you're providing. 

NICK
Yeah, I think that's really good. I think volume and TVL, TVL especially, is a bit overrated. At the end of the day, it does come down to fees, although I guess you could think about volume and TVL as potential future fees, but I think it is hard to increase fees over time. 

KATHERINE
Yeah, super fair. Super fair. Where can people go and follow the work that you guys do at Across?

HART
So you can go check out our website Across.to, and on Twitter it's @acrossprotocol. For me personally, I'm @hal2001 and I'm trying to write increasingly spicy takes on the future of interop. So happy to entertain you there. What about you, Nick? 

NICK
I am on Twitter at @mountainwaterpi. And also, Hart's Twitter is really good and you should read it - it is really spicy, but I think it will open up your mind about how to think about L2s. 


HART
Thanks Nick. 

KATHERINE
Yeah. Plus one on the spicier takes coming from Hart lately. Good to see it. All right, guys, thanks for taking the time today. 

HART
Thanks, Katherine. Thank you for having us.

NICK

Thanks, Katherine. Thanks, Hart.
