In the first two parts of this series, we focused on the technical issues that arise when splitting the stack and on the improvements the modular world still needs. We covered a number of ongoing efforts to address the problems that naturally arise in cross-domain setups. In this final part, however, we want to focus on user experience: how modularization, customization, and specialization can help create better applications. This closing chapter looks at the unique creative possibilities modularity opens up for developers to deliver Web2-grade user experiences with Web3 verifiability.
The reason to build modular systems shouldn't be to cater to a narrative or to be modular for its own sake, but because modularity lets us build better, more efficient, and more customizable applications. When building modular, specialized systems, a number of unique capabilities emerge, some obvious and some less so. Our goal here is to give an overview of the capabilities of modular systems you may not have known about, beyond scalability.
We believe one of the capabilities modularity gives developers is the ability to build highly customized, specialized applications that lead to a better experience for end users. We've previously discussed the ability to set rules for how trades are ordered and executed.
Verifiable sequencing rules (hereafter VSRs) are one of the interesting opportunities offered by control over sequencing, especially for developers who want to build trading systems with "fairer" execution. A full treatment of loss-versus-rebalancing (LVR) for liquidity providers is beyond the scope of this article, so we'll avoid going too deep into it. Keep in mind that the setups we describe are aimed primarily at AMMs rather than order-book models, although CLOBs (and even CEXs) would also benefit greatly from verifiable sequencing rules tailored to their specific settings. In an off-chain setup, there is a clear need for some notion of zero-knowledge or optimistic execution backed by cryptoeconomic security.
VSRs are particularly interesting when we consider that the majority of retail traders have not yet adopted (and are unlikely to adopt) protective measures. Most wallets/DEXs also don't implement private mempools, private RPCs, or similar protections; most transactions are submitted directly through a frontend (whether an aggregator or a DEX frontend). As a result, unless the application itself intervenes in how orders are processed, end users may not get the best execution.
When we consider where transactions sit in the supply chain, the role of VSRs becomes obvious. They apply at the point where professional participants sequence (and include) transactions, usually based on some auction or priority fee. This sequencing matters because it determines which trades are executed and when. Essentially, whoever holds sequencing authority has the ability to extract MEV, usually in the form of priority fees (or tips).
As a result, it can be attractive to write rules for how sequencing is handled in order to provide fairer trade execution (in a DEX setup) for end users. If you're building a general-purpose network, however, you should probably avoid imposing such rules.
Also, some forms of MEV are important, such as arbitrage and liquidations. One idea is to create an "express lane" at the top of the block aimed specifically at whitelisted arbitrageurs and liquidators, who pay higher fees and share a portion of the revenue with the protocol.
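As a loose illustration of this idea, here is our own sketch of such a top-of-block lane; the sender names, fee values, and the 20% revenue share are all invented for the example:

```python
# Hypothetical sketch of a top-of-block "express lane".
# Whitelisted arbitrageurs/liquidators are ordered first (by fee) and
# share part of their fees with the protocol; everyone else keeps
# first-come-first-served order. All names and numbers are made up.

PROTOCOL_SHARE = 0.2  # assumed 20% revenue share with the protocol

def build_block(txs, whitelist):
    """txs: list of (sender, fee) in arrival order."""
    express = sorted([t for t in txs if t[0] in whitelist],
                     key=lambda t: -t[1])          # fee-ordered lane
    regular = [t for t in txs if t[0] not in whitelist]
    protocol_revenue = sum(fee for _, fee in express) * PROTOCOL_SHARE
    return express + regular, protocol_revenue

block, revenue = build_block(
    [("alice", 1.0), ("arb1", 5.0), ("bob", 0.5), ("liq1", 3.0)],
    whitelist={"arb1", "liq1"},
)
# express lane first (arb1, liq1 by fee), then alice and bob in arrival order
```

Regular flow keeps its arrival order, so the lane only changes who sits at the top of the block, not how ordinary users are sequenced.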
In the paper "Credible Decentralized Exchange Design via Verifiable Sequencing Rules," Matheus V. X. Ferreira and David C. Parkes propose a model in which a block's sequencer is subject to a set of constraints, and those constraints are verifiable. If the sequencer deviates from the rules, an observer can generate a fault proof (or, since the constraints are mathematically verifiable, you could imagine a ZK circuit encoding them and using a ZKP as a validity proof). The main idea is to give the end user (the trader) an execution-price guarantee: the transaction executes at a price at least as good as if it were the only transaction in the block (with the caveat that, under first-come-first-served buy/sell/buy/sell alternation, some delay is involved). The core of the proposal is that the sequencing rule restricts the builder (in a PBS setting) or the sequencer from including a run of same-direction transactions (say, sell/sell) unless they execute at a better price than what was available at the top of the block. In addition, if a sell appears at the end of a run of buys (e.g., buy, buy, buy, sell), the sell is not executed, since such a pattern may indicate that searchers (or builders/sequencers) are using those buys to push the price in their favor. In essence, the protocol rules guarantee that users cannot be exploited to hand a better price (i.e., MEV) to someone else, or to move the price via priority fees. The obvious flaw of the rule (when sells heavily outnumber buys, or vice versa) is that long-tail orders may receive a relatively poor price.
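To make the directional constraint concrete, here is a simplified checker of our own devising (not code from the paper, whose actual rule is richer): a constant-product pool where buys push the price up and sells push it down, and a verifier that rejects any sequence in which a trade moves the price further away from its block-initial value.

```python
# Simplified checker in the spirit of a "greedy" verifiable sequencing
# rule over a constant-product AMM (x * y = k). Our own sketch only.

def apply_trade(x, y, trade):
    """('buy', dx): add dx of X, take out Y (price x/y rises).
    ('sell', dy): add dy of Y, take out X (price x/y falls)."""
    side, amount = trade
    k = x * y
    if side == 'buy':
        x += amount
        y = k / x
    else:
        y += amount
        x = k / y
    return x, y

def follows_rule(x, y, trades):
    """Whenever the pool price sits above its block-initial value, the
    next included trade must be a sell (pushing the price back down),
    and vice versa; at the initial price, either direction is fine."""
    p0 = x / y
    for trade in trades:
        p = x / y
        if p > p0 and trade[0] != 'sell':
            return False
        if p < p0 and trade[0] != 'buy':
            return False
        x, y = apply_trade(x, y, trade)
    return True

# an alternating sequence passes; two buys in a row push the price
# away from its starting value and are rejected
assert follows_rule(1000.0, 1000.0, [('buy', 10), ('sell', 10)])
assert not follows_rule(1000.0, 1000.0, [('buy', 10), ('buy', 10)])
```

A fault proof in this setting is simply a block whose trade sequence makes `follows_rule` return `False`, which anyone replaying the block can produce.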
It's nearly impossible for a general-purpose smart contract platform to enforce rules like these purely on-chain, since you have no control over execution and ordering. At the same time, you're competing with many others, so trying to force a spot at the top of the block via priority fees would be needlessly expensive. One of the features of a modular setup is that it lets application developers customize how their execution environment behaves. Whether that means sequencing rules, a different VM, or custom changes to an existing VM, such as adding a new opcode or changing the gas limit, it is up to the developer and depends on their product.
In the case of a rollup using a data availability and consensus layer plus a settlement layer (for liquidity), a possible setup looks like this:
Another possible idea is transaction splitting. Imagine a pool of pending transactions containing a large order that would cause significant slippage. Would it be fairer to the end user to split that order and execute it across consecutive blocks (or at the end of a block, if it complies with a VSR)?
If the end user cares about latency, they may not want their order split. That case is less common, however, and optimizing for splitting larger trades may result in more efficient execution for the vast majority of users. Either way, one concern is that MEV searchers may notice these sequential trades and try to position themselves before or after them. Still, the total extractable MEV is likely much smaller when the order is broken into small transactions across a series of blocks.
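A toy model (our own sketch, with made-up pool sizes) shows why splitting can help: if arbitrageurs rebalance the pool back toward the external price between blocks, ten small chunks get a better total fill than one large trade. The rebalancing assumption is doing the work here; in a single block with no intervening flow, splitting changes nothing on a constant-product pool.

```python
# Toy model of order splitting on a constant-product pool (x * y = k).
# Assumption: between blocks, arbitrageurs restore the pool to its
# original reserves, so each chunk trades against a "fresh" pool.

def swap_out(x, y, dx):
    """Amount of Y received for selling dx of X into the pool."""
    return y - (x * y) / (x + dx)

def execute(total, chunks, x=1_000_000.0, y=1_000_000.0):
    """Execute `total` X in `chunks` pieces; reserves reset for each
    chunk, modelling inter-block arbitrage rebalancing."""
    return sum(swap_out(x, y, total / chunks) for _ in range(chunks))

one_shot = execute(100_000, 1)
split = execute(100_000, 10)
assert split > one_shot  # splitting reduces total slippage here
```

This also illustrates the MEV point above: each chunk moves the price far less, so there is less for a sandwiching searcher to extract per block.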
Another interesting idea we mentioned earlier in the post is to use frequent batch auctions (FBA), advocated by Eric Budish and his colleagues, to process transactions in batches rather than serially. This helps surface overlapping demand (coincidence of wants, CoW) and integrates arbitrage opportunities into the design of the market mechanism itself. It also helps "fight" latency games in continuous block building (or priority-fee battles in serial blocks). Thanks to Michael Jordan (DBA) for bringing the paper to our attention and for his work on mitigating latency races. Implementing FBA as part of a rollup's fork choice and sequencing is an interesting setup available to developers, and it has seen significant traction over the past year, especially with Penumbra and CoWSwap. One possible setup would look like this:
In this setup, there is no first-come-first-served race or priority-fee gas war; instead, a batch auction runs at the end of each block over the orders accumulated since the previous block.
In general, in a world where the majority of trading has moved to a non-custodial "on-chain" setting, FBA may be one of the more efficient paths to "real" price discovery, depending on block time. Leveraging FBA also means that, since all orders in a batch are sealed and not revealed until the auction ends (assuming some cryptographic setup), front-running is significantly reduced. A uniform settlement price is the key here, since it removes any incentive to reorder transactions within a batch.
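A minimal sketch of the clearing step might look like the following. This is our own illustration: the choice of the marginal bid/ask midpoint as the uniform price is an assumption, as conventions vary.

```python
# Minimal uniform-price batch auction clearing. Orders collected
# between blocks clear at a single price, so intra-batch ordering
# confers no advantage. Pricing at the midpoint of the marginal
# matched bid/ask pair is one simple convention among several.

def clear_batch(bids, asks):
    """bids/asks: lists of (limit_price, qty).
    Returns (clearing_price, matched_volume); price is None if the
    books don't cross."""
    bids = sorted(bids, key=lambda o: -o[0])  # highest bid first
    asks = sorted(asks, key=lambda o: o[0])   # lowest ask first
    i = j = 0
    bid_qty = ask_qty = 0.0
    bid_px = ask_px = None
    price, volume = None, 0.0
    while True:
        if bid_qty == 0.0:
            if i == len(bids):
                break
            bid_px, bid_qty = bids[i]; i += 1
        if ask_qty == 0.0:
            if j == len(asks):
                break
            ask_px, ask_qty = asks[j]; j += 1
        if bid_px < ask_px:               # books no longer cross
            break
        match = min(bid_qty, ask_qty)
        volume += match
        bid_qty -= match
        ask_qty -= match
        price = (bid_px + ask_px) / 2     # marginal-pair midpoint
    return price, volume

# one bid at 10 crosses one ask at 8: everything clears at 9
assert clear_batch([(10.0, 5.0)], [(8.0, 5.0)]) == (9.0, 5.0)
```

Because every matched order settles at the same price, submitting earlier or later within the batch changes nothing, which is exactly the anti-front-running property described above.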
It's also worth noting that designs like the ones we just covered were discussed on the Ethresear.ch forum as far back as 2018 (see here). The post mentions two papers proposing a batch auction mechanism on Plasma (something of a precursor to modern rollups) in which each batch accepts orders to buy a given ERC20 token up to a maximum limit price. Orders are collected at fixed intervals, and a uniform settlement price is computed for all token trading pairs. The overall idea is that this model would help eliminate the front-running that is common on popular AMMs.
Another important thing to note is that in these setups, the sequencer may need incentives to implement (and enforce) the rules described above. This is often overlooked, but much of blockchain network infrastructure is run by specialized companies whose costs differ considerably from those of an average home participant. In general, incentives are an essential part of getting security infrastructure implemented: sequencers and builders are more likely to put in the effort when incentives are aligned with the rules being enforced. That means these setups should also have an active market around them. Admittedly, this kind of market tends toward centralization, since the capital cost of specialization can be high; the smartest (and wealthiest) participants are likely to integrate and specialize in order to capture as much value as possible. Exclusive order flow, in particular, can cripple some participants and push centralization further. A flat benchmark fee may be sufficient, but it doesn't really push sequencing participants toward specialization. As a result, you may want to introduce mechanisms that keep traders happy with their outcomes through incentives suited to your particular situation.
This is clear to most people, but it still needs saying when discussing ordering at the rollup level. If you control ordering, it is easier to "extract" value for the protocol, because you control the power to reorder transactions, which on most L1s is sold via priority fees (MEV-Boost-esque setups). That gives you the priority fees paid by sophisticated participants who extract value on-chain, and those participants are usually willing to pay a considerable amount (up to the point where extraction stops being profitable). However, most rollups currently operate on a first-come-first-served basis, so most MEV extraction happens through latency wars, which put serious strain on rollup infrastructure. Because of this, we are likely to see more and more rollups adopt sequencing structures with a notion of priority fees (e.g., Arbitrum's Timeboost mechanism).
Another example we like is Uniswap. Today, Uniswap as a protocol "creates" a lot of inefficiencies, which are exploited by participants seeking to extract MEV (arbitrage, at the expense of liquidity providers). These participants pay substantial fees to extract that value, yet none of it accrues to the Uniswap protocol or its token holders. Instead, a significant portion of the extracted value is paid as priority fees to Ethereum proposers (validators) via MEV-Boost in exchange for inclusion in a block at the point where the value can be captured. So while there are plenty of MEV opportunities in Uniswap order flow, none of them are captured by Uniswap.
If Uniswap could control ordering within the protocol (and extract priority fees from searchers), it could monetize that flow and perhaps even pass some of the profits to token holders, liquidity providers, or others. With changes to Uniswap (e.g., UniswapX) moving execution off-chain (with Ethereum as the settlement layer), this mechanism looks increasingly plausible.
If we assume a rollup with a partial PBS mechanism, the order flow and commercialization process might look like this:
As a result, the monetization of rollup sequencers and proposers might follow this formula:
Issuance (PoS) + fee income (incl. priority fees) − costs (DA, state publication, storage)
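Plugging purely hypothetical numbers into the formula (every value below is invented for illustration, denominated in ETH per day):

```python
# Hypothetical rollup operator P&L per the formula above.
# Every number here is made up for illustration (ETH per day).
issuance = 0.0         # many rollups have no PoS issuance today
fee_income = 12.5      # assumed base fees collected from users
priority_fees = 3.2    # assumed priority fees / tips captured
da_cost = 4.1          # assumed data availability posting cost
state_cost = 1.0       # assumed state publication / storage cost

profit = issuance + fee_income + priority_fees - da_cost - state_cost
assert round(profit, 2) == 10.6
```

The point of the exercise: priority fees can be a meaningful share of revenue, which is why sequencing control matters for rollup economics.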
A good way to see how much value is currently being extracted on Ethereum (especially from arbitrage) is Mevboost.pics, which gives a good overview of how much value can actually be extracted from inefficiencies.
In addition, decoupling the priority-fee gas war from the base layer can help contain supply-chain disruption by isolating MEV extraction within the execution environment. Keep in mind, though, that if leader election happens on the rollup, the majority of MEV will also be extracted on the rollup, leaving little for the underlying layers unless the DA layer is included in fee flows, the settlement layer earns priority fees from liquidity consolidation, or other economies of scale apply.
To clarify, many of these structures can function as purely off-chain systems without verification bridges or strong security guarantees, though that comes with trade-offs. We're starting to see more of these designs appear, some live and some not yet visible. One thing worth pointing out is that modularity doesn't necessarily mean rollups.
The sequencing designs above are examples of how fine-tuning infrastructure can dramatically improve the applications built on top of it.