Revisit Native Rollups
  1. Introduction

  2. What are Native Rollups?

    • EXECUTE precompile

    • Gas

    • What do rollups get by being native?

    • Re-execution as a stepping stone

  3. Towards Real Time Proving

    • Delaying state_root

    • Delaying execution

  4. FAQ

    Introduction

    Sharding was a hot topic during the 2017-2020 era. At that time, teams such as Harmony, Zilliqa and Elrond implemented sharding in their blockchains. Sharding means dividing the network into smaller, parallel chains called “shards” that process transactions simultaneously, as a naive way to scale a distributed system.

    Sharding was also taken seriously by the community during the Ethereum 2.0 era. Ethereum ultimately decided against implementing it due to four main challenges:

    • Mindset Differences: In the sharding mindset, the protocol itself would enforce the exact number of shards from the top down. These shards were monolithic chains that followed a predefined template and lacked programmability, providing multiple identical copies of the L1.

    • Optimistic Security: At the time, optimistic proofs would have been used to keep the shards honest, as ZK was not yet mature. This would have required systematic management of fraud-proof logic on-chain.

    • Complexity: Implementing sharding at the L1 level would have added significant protocol complexity, particularly in managing fast preconfirmation and slow final confirmation systems, as well as coordinating shards with different security levels.

    • Overloading Consensus: Pushing for greater scalability at the L1 level could increase centralization risks. If implemented at the base layer, those risks would affect the entire protocol, rather than being confined to individual L2s as they are now.

    Native rollups are essentially a return of sharding, but this time it's different. We learned our lesson and are now better equipped.

What are Native Rollups?

It is important to remember that rollups have data, sequencing and execution modules. Native rollups use Ethereum's own execution environment for their execution module. We can call them programmable execution shards of the L1.

Understanding how to consume L1 execution as a rollup can be tricky. To consume L1 execution, we must be able to execute the EVM inside the EVM, so that the L1 itself becomes aware of each native rollup's state transition[1] for every block. To do this, we need a new precompile.

EXECUTE Precompile

The EXECUTE precompile creates a mechanism for one EVM context to verify the execution results of another EVM context while maintaining the same execution rules and state transition logic.

The precompile takes three inputs:

  • pre_state: the 32-byte state root before execution

  • post_state: the 32-byte state root after execution

  • witness_trace: the execution trace containing the transactions and state-access proofs

The precompile performs an assertion: it verifies that executing the trace, starting from the pre-state root, results in the post-state root, and returns true if the state transition function[2] is valid. The trace must be available to validators (as blobs or calldata) so they can re-execute the computation and verify the state transition. It's important to note that this precompile does not take a proof as input. This means the protocol does not enshrine any proof system; instead, proofs are gossiped via the p2p layer, with a new gossip topic created for each proof type.
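The precompile's assertion can be sketched as follows. Note that `apply_trace` here is a toy stand-in (a hash) for full stateless re-execution, and all names are illustrative rather than taken from any specification.

```python
import hashlib

def apply_trace(pre_state: bytes, witness_trace: bytes) -> bytes:
    """Toy stand-in for stateless re-execution: a real implementation would
    run the EVM over the witness trace; here we just hash, for illustration."""
    return hashlib.sha256(pre_state + witness_trace).digest()

def execute_precompile(pre_state: bytes, post_state: bytes, witness_trace: bytes) -> bool:
    """Assert-style check: replaying the trace from pre_state must yield post_state."""
    if len(pre_state) != 32 or len(post_state) != 32:
        return False
    return apply_trace(pre_state, witness_trace) == post_state
```

The key point the sketch captures is that the precompile receives no proof object: validity is established by (re-)computing the post-state root and comparing it.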

Gas

Ethereum's resources are limited, and gas is the unit that meters them. The EXECUTE precompile implements a gas model to manage computational resources:

  • Base Cost: The precompile charges a fixed gas cost EXECUTE_GAS_COST plus the amount of gas used by the execution trace multiplied by a gas price.

  • Cumulative Gas Limit: An EIP-1559-style mechanism meters and prices the cumulative gas consumed across all EXECUTE calls in an L1 block through:

EXECUTE_CUMULATIVE_GAS_LIMIT: The maximum gas that can be consumed by all EXECUTE calls in a block
EXECUTE_CUMULATIVE_GAS_TARGET: The target gas usage for efficient pricing

It can be thought of as the limit-target gas model, similar to DA pricing in blobs.
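A minimal sketch of such a limit/target fee update, modeled on EIP-1559. The constants and the 1/8 adjustment quotient are assumptions for illustration, not values from any specification.

```python
# Illustrative values; not from any spec.
EXECUTE_CUMULATIVE_GAS_LIMIT = 30_000_000
EXECUTE_CUMULATIVE_GAS_TARGET = EXECUTE_CUMULATIVE_GAS_LIMIT // 2
ADJUSTMENT_QUOTIENT = 8  # EIP-1559 uses 8 for the base-fee change denominator

def next_execute_base_fee(parent_base_fee: int, parent_cumulative_gas: int) -> int:
    """Raise the price when cumulative EXECUTE gas is above target, lower it when below."""
    delta = parent_cumulative_gas - EXECUTE_CUMULATIVE_GAS_TARGET
    change = (parent_base_fee * abs(delta)
              // EXECUTE_CUMULATIVE_GAS_TARGET // ADJUSTMENT_QUOTIENT)
    if delta > 0:
        return parent_base_fee + change
    return max(parent_base_fee - change, 0)
```

As with blob pricing, usage at exactly the target leaves the price unchanged, while sustained demand above target makes EXECUTE calls progressively more expensive.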


It is important to remember that native rollups and SNARKifying the L1 are two concepts that are often intertwined. While SNARKifying the L1 scales it vertically by removing the gas limit through SNARKified execution (zkEVM) and consensus (Beam), native rollups scale the L1 horizontally by creating arbitrary copies of the EVM in a programmable manner.

What do rollups get by being native?

  • Security. Today's rollup designs must rely on security councils to upgrade the chain in response to potential bugs. With native rollups, all governance is handled by Ethereum's social consensus. Native rollup operators no longer need to worry about bugs; the Ethereum community takes care of them.

  • Simplified synchronous composability with L1. Based rollups come very close to achieving this, but only if L1 and L2 blocks are built at the same time by the same builder. Native rollups can achieve it without this requirement. A native rollup could use the EXECUTE precompile to verify the state of another native rollup without additional trust assumptions. For read-only cross-rollup operations, a contract on rollup A could directly verify the state of rollup B by referencing B's latest state root and providing an appropriate witness trace showing the data exists in that state.

  • Forward Compatibility. As the L1 EVM evolves, native rollups automatically inherit all improvements without requiring separate implementation work. This ensures long-term compatibility with Ethereum's roadmap.
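The read-only cross-rollup pattern described above boils down to checking a witness against another rollup's posted state root. A minimal sketch with a toy Merkle branch check (the helper names are illustrative, not any client's API):

```python
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    """Hash two child nodes into their parent."""
    return hashlib.sha256(left + right).digest()

def verify_inclusion(state_root: bytes, leaf: bytes, proof: list, index: int) -> bool:
    """Walk the Merkle branch from leaf to root, choosing sides by index bits.
    Returns True iff the leaf is included under state_root."""
    node = hashlib.sha256(leaf).digest()
    for sibling in proof:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == state_root
```

A contract on rollup A would run this kind of check against rollup B's latest state root; production rollups use Merkle-Patricia or Verkle commitments rather than this toy binary tree, but the trust model is the same.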

Re-execution as a stepping stone

In the context of native rollups, re-execution is positioned as the initial implementation approach. Re-execution refers to the process where validators directly execute a transaction trace themselves to verify the validity of a state transition of native rollups, rather than relying on SNARKs. If EXECUTE_CUMULATIVE_GAS_LIMIT is kept small enough, this re-execution remains manageable for validators.

Towards Real Time Proving

With re-execution, validators must process all transactions themselves, limiting throughput through the EXECUTE_CUMULATIVE_GAS_LIMIT parameter. Real-time proofs would allow this limit to be dramatically increased since validators would only need to verify proofs rather than re-execute everything.

As the trend moves rapidly towards real-time proving, we need to take steps to increase the proving window for native rollups. To buy more time for proving, we need to change Ethereum's current block-processing structure.

In the current structure, all of the following steps must be completed within 12 seconds (divided into three 4-second phases) before the chain proceeds to the next block:

  • Block N is proposed with transactions

  • Before validating/attesting to block:

    • Must execute all transactions

    • Calculate state changes

    • Calculate stateRoot

    • Calculate receipts & logs

  • Only after all this is done can the block be validated and attested to.

According to the current flow, proving would need to be completed within 4 seconds for synchronous composability with the L1. ZK isn't yet mature enough to prove an Ethereum block within 4 seconds, so we need more flexibility for proving.

Delaying state_root

Each block header includes a state_root that represents the state after executing all transactions in that block. This creates a performance bottleneck because block builders and validators must calculate this state_root before they can propose or verify blocks. The calculation is computationally intensive, taking up about 40-50% of block builders' time and around 70% of block processing time for some clients.

Resnick and Noyes proposed removing the state_root calculation from the critical path[3] and moving it to the section where the clients are idle[4]. Instead of having block n contain the state_root after its own transactions, it would contain the state_root from after block n-1's transactions. This creates a one-block delay in state_root references, but brings reduced latency to the chain and buys a whole slot of time for proving.
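A minimal sketch of the one-block delay, with a toy hash standing in for real execution; the `Header` structure and field names are illustrative only:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    number: int
    delayed_state_root: bytes  # post-state of block number-1, NOT this block

def simulate_chain(tx_batches):
    """Each header commits to the post-state root of the *previous* block,
    so proposing block n never requires executing block n first."""
    headers, post_roots = [], [bytes(32)]  # genesis post-state root
    for n, txs in enumerate(tx_batches, start=1):
        headers.append(Header(n, post_roots[-1]))  # one-block-delayed reference
        # Toy state transition: real clients would execute the transactions.
        post_roots.append(hashlib.sha256(post_roots[-1] + txs).digest())
    return headers, post_roots
```

The invariant the sketch demonstrates is that block n's header carries the root produced by block n-1, which is already known when block n is built, freeing a full slot of time for proving.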

Delaying execution

Delayed execution is a broader improvement than delaying just state_root. With delayed execution, the block validation and attestation process would be separated from the heavy computation of actually executing transactions.

Static validation performs only the most basic checks, such as verifying transaction formats and signatures. Once consensus is reached via validate+attest, the actual transactions are executed and the state_root calculation is performed.

This approach provides several key benefits: consensus efficiency, since the block is validated early; more proof-generation time, since provers get the entire attestation period plus idle time; and reduced latency, since the critical path is shortened.
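The split between static validation and deferred execution can be sketched as follows, with toy stubs standing in for real format and signature checks (none of these names come from an actual client):

```python
def well_formed(tx) -> bool:
    """Toy format check: a transaction is a dict with data and signature fields."""
    return isinstance(tx, dict) and "data" in tx and "sig" in tx

def signature_ok(tx) -> bool:
    """Toy stand-in for real signature verification."""
    return tx["sig"] == "valid"

def static_validate(block) -> bool:
    """Cheap checks only: format and signatures, no state execution."""
    return all(well_formed(tx) and signature_ok(tx) for tx in block)

def process_block(block):
    """Attest first, execute later: heavy work moves off the critical path."""
    if not static_validate(block):
        raise ValueError("invalid block")
    phases = ["attest"]                  # consensus can proceed immediately
    phases.append("execute+state_root")  # deferred until after attestation
    return phases
```

The ordering in `process_block` is the whole point: attestation no longer waits on execution, so proving can use the attestation window and the idle time after it.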

As this upgrade changes the entire structure of the chain, it also affects other upgrades such as FOCIL. We don't want to go into details; the potential conflicts can be seen here.

FAQ

Which proof type will be used?

There will be no single proof type. We want prover diversity as well as client diversity. Validators make a subjective choice of provers for themselves. zkEVM = {EVM, zkVM}; there will be (m x n) prover combinations, e.g. {Geth, Risc0}, {Reth, SP1}, {Erigon, Lita}, etc.
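The (m x n) pairing can be illustrated combinatorially; the client and zkVM lists below are just the examples mentioned above, not an exhaustive or endorsed set:

```python
from itertools import product

clients = ["Geth", "Reth", "Erigon"]  # execution clients (examples)
zkvms = ["Risc0", "SP1", "Lita"]      # zkVMs (examples)

# zkEVM = {EVM client, zkVM}: every pairing is a candidate prover stack.
zkevms = [f"{{{c}, {z}}}" for c, z in product(clients, zkvms)]
```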

Please see EthProofs for diversification.

Who will produce the proofs?

Anyone. Even if there is only one prover, the chain will continue. There is an open question about how to incentivize the provers at the protocol level.

Will there be a consensus among the proofs?

No. Proofs will be gossiped off-chain, not posted on-chain. There will only be consensus that a valid proof exists somewhere.


[1]: Process of moving from one valid state to another through transaction execution. 

[2]: Set of rules that determine how transactions modify the state. 

[3]: The critical path refers to the sequence of operations that must occur sequentially and cannot be parallelized, ultimately determining the minimum time required to process a block.

[4]: Idle time helps to make sure that blocks are produced at regular times, improve network security by reducing orphaned blocks and improve network synchronisation. A timing safety margin.


