Write Your Own Rollups Using Zilkworm
(This article is a projection of how Zilkworm could work for an L2/rollup. At the time of writing, Zilkworm doesn't yet support any L2 execution or proving.)
Introduction
This article explores how Zilkworm can be used to generate proofs for an L2 rollup. This is possible because most popular L2s and blockchain networks use the EVM under the hood.
We give a brief introduction to rollups and their validity mechanisms before diving deeper into the technical bits and other considerations for an existing or new chain project.
Why a rollup?
Ethereum L1 has well-known limits on transaction throughput. Rollups address this by batching transactions, thereby lowering costs and scaling the ecosystem.

In principle, the validation of the raw batch of transactions happens on a "side-chain", by a process outside of L1 transaction validation itself. The proof of validity of these L2 transactions is then posted back to L1 in batches using one of two popular methods:
Optimistic L2 block inclusion aided by fraud proofs: validators apply a sequenced batch of transactions, and verifiers have a certain "challenge period" during which they observe these transactions and can submit a fraud proof if there is a discrepancy.
Zero-knowledge proofs of L2 batches/blocks verified on L1: an entity processes a batch of transactions through a ZKP circuit and generates a proof of the batch, which is then posted to and verified by a smart contract on L1.
There are many variations on these two methods, and some others besides. In this article, we dive into how Zilkworm could be used in an L2 like Arbitrum for a better user experience with the rollup mechanism.
Example of Arbitrum
As the most popular L2 at the moment, Arbitrum sequences a huge number of transactions and submits checkpoints to L1. Given the gigantic volume of transactions, the sequencer needs to be powerful and low-latency, and in practice somewhat centralized. The upside is that users get a good experience with low gas prices, and security isn't too far off that of mainnet. The downside is that users have to wait out a challenge period before they can withdraw their assets from the chain (back to Ethereum L1). Further, if the sequencers are somewhat centralized or local to a region, they could censor certain users or geographies. If sequencers don't submit bad blocks, fraud proofs are a rare occurrence and all goes well most of the time. But that only applies to a chain as big and established as Arbitrum, which has another leg of guarantee: the reputation cost of submitting invalid transactions through the sequencers.
How would a z(il)k-Rollup work here

The core of Arbitrum is almost the same as that of Ethereum: they both use the EVM! That means the same Zilkworm core we discuss can be used to process Arbitrum transactions as well, with minor changes to the logic.

Essentially, an Arbitrum sequencer (which can be decentralized) submits a batch of transactions on top of a previous block or L1 checkpoint. This gets "optimistically" included right away to avoid bad UX. But soon after, a "prover-validator" runs the batch of transactions through Zilkworm to generate a succinct proof and submits it to a SNARK-verifier contract on L1. Since the Zilkworm prover cannot generate a proof for a bad transaction, and a bad proof would fail to verify, the L1 verification would not go through, and the L2 would then mark it as a bad block. On the upside, security is now truly L1-level once the proof verification is done: no waiting for a challenge period!
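The flow above can be sketched in code. This is a minimal illustration only: none of these types or functions exist in Zilkworm today, and the proving and verification steps are stubbed out.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch of the sequence -> prove -> verify flow described above.
// All names here are illustrative, not Zilkworm or Arbitrum APIs.

struct Tx { std::string payload; };
struct Batch { uint64_t parent_checkpoint; std::vector<Tx> txs; };
struct Proof { std::vector<uint8_t> bytes; };  // succinct SNARK proof

enum class BlockStatus { OptimisticallyIncluded, Finalized, Rejected };

// 1. The sequencer posts the batch; the L2 includes it optimistically.
BlockStatus include_optimistically(const Batch&) {
    return BlockStatus::OptimisticallyIncluded;
}

// 2. A prover-validator re-executes the batch inside the ZKVM and produces a
//    succinct proof. Stub: an empty (unprovable) batch yields an empty proof.
Proof prove_batch(const Batch& b) {
    return Proof{std::vector<uint8_t>(b.txs.size(), 0x01)};
}

// 3. The L1 verifier contract accepts or rejects the proof; a bad batch either
//    cannot be proved at all, or its proof fails verification.
bool l1_verify(const Proof& p) { return !p.bytes.empty(); }

BlockStatus settle(const Batch& b) {
    include_optimistically(b);
    Proof p = prove_batch(b);
    return l1_verify(p) ? BlockStatus::Finalized : BlockStatus::Rejected;
}
```

The point of the sketch is the ordering: optimistic inclusion happens first for UX, and L1-level finality arrives as soon as the proof verifies, with no challenge period in between.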
The tech checklist
With a team of capable C++ engineers, one could readily adapt Zilkworm for a specific use case (or contact us to get it done). The journey to the final deliverable typically includes:
Identify compatibility variance with Ethereum mainnet
Identify performance requirements and run zilkworm benchmarks to know the numbers and hardware resources required for proving
Fork the Zilkworm repo into an unofficial variant adapted to the specific changes. The EVM and the state-transition bits may need to be changed to fit a chain such as, say, Arbitrum
Run protocol execution tests for that protocol, similar to Ethereum/tests or EESTs, to check correctness and performance of individual opcodes, contracts, etc.
Prove the chain!
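The first checklist item, identifying compatibility variance, can be thought of as diffing a chain configuration against mainnet. The sketch below is purely illustrative; the field names are hypothetical and the real divergence points depend on the target chain.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical adaptation surface for a forked prover. Fields are examples of
// the kinds of things an L2 changes relative to Ethereum mainnet.
struct ChainConfig {
    uint64_t chain_id;
    uint64_t block_gas_limit;
    bool has_custom_precompiles;   // e.g. ArbOS-style system contracts
    bool skips_header_validation;  // L2s usually drop L1 consensus header rules
};

// Checklist step 1: enumerate where the chain diverges from mainnet, so the
// fork knows which EVM and state-transition bits need changing.
std::vector<std::string> compatibility_variance(const ChainConfig& c) {
    std::vector<std::string> diffs;
    if (c.chain_id != 1) diffs.push_back("chain-id");
    if (c.has_custom_precompiles) diffs.push_back("precompiles");
    if (c.skips_header_validation) diffs.push_back("header-validation");
    return diffs;
}
```

Each entry in the resulting list maps to a fork point in step 3 of the checklist, and to a set of protocol tests in step 4.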
High-level considerations
The performance considerations of ZK proving
A big disadvantage of zero-knowledge proofs is their computational intensity. Until a few years ago, mainstream ZK proofs were thought to be beyond consumer hardware, but that's no longer the case. In fact, Zilkworm can typically generate a proof for a 30M-gas Ethereum block in under 3 minutes on a standard consumer GPU. This is thanks to extensive performance optimizations and parallelization in the ZKVM provers. At the time of writing, Zilkworm mainly uses the SP1 prover, which can accelerate proofs with a GPU. Some alternatives like Brevis Pico can even prove a typical mainnet block in under 10 seconds thanks to massive parallelization over, say, a dozen Nvidia RTX 5090s.
Another consideration is the long release cycles and developer time it took, until a few years ago, to hand-craft circuits. With most ZKVMs now turning to general-purpose minimalistic ISAs like RISC-V, any generic C++ or Rust program can be proven. This allows Zilkworm to take advantage of the C++ compiler and Rust tooling integration around RISC-V to deliver a robust end-to-end block and transaction prover.
Decentralization Argument
For "big" batches of transactions, decentralization could be detrimental to UX. But decentralization is a de-facto requirement of blockchains, for all the reasons of avoiding single points of failure and censorship.
Essentially, a proof can directly replace re-execution or reliance on secondary mechanisms like fraud proofs. A validator just needs to download a proof (< 100 KB) and verify it, and voila: the head of your rollup chain is canonical. A validator in this case therefore doesn't need to be massively compute-capable.
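The validator's job then reduces to a cheap check rather than full re-execution. The sketch below illustrates that shape; the names are hypothetical and the actual cryptographic verification (a pairing or FRI check) is stubbed as a sanity check on the proof bytes.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical validator-side check: download a succinct proof and verify it,
// instead of re-executing the whole batch. Not a real Zilkworm API.
constexpr std::size_t kMaxProofBytes = 100 * 1024;  // proofs stay under ~100 KB

struct Proof { std::vector<uint8_t> bytes; };

bool cheap_verify(const Proof& p) {
    // Real verification runs the SNARK verifier over these bytes; here we only
    // model its cost profile: bounded input, constant-ish work, no EVM needed.
    return !p.bytes.empty() && p.bytes.size() <= kMaxProofBytes;
}
```

Because the input is bounded at ~100 KB regardless of how many transactions the batch contained, verification cost does not grow with batch size, which is what makes a large, low-powered validator set viable.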
The other argument against decentralization like this is that an apex org or foundation-controlled sequencer makes money on the transactions that help secure the L2, as long as the sequencers don't have to compete with external players. This, however, can still be maintained in a zk-proof-based validity mode if only designated provers organise the flow of fees. E.g. a prover could make 1000x the "fees" revenue of a permissionless set of validators.
The risks of censorship and network downtime far outweigh the arguments against zk-proof-based decentralized consensus.
The potential L3 "fraud" amplification
For an optimistic rollup such as Arbitrum, there is a certain "wait period" before assets from a child chain can be withdrawn to the parent chain. This is typically around 7 days. It ensures that at least one honest validator out there has enough time to check a block and verify it's not bad.
From the perspective of the code running the protocol, though, "optimism" means nothing here: for any state change to happen via child -> parent messaging, the wait time has to be respected. There is therefore a constant delay before it can happen.
Naturally, the question is: what about an L3, an optimistic chain on top of another optimistic chain? Well, it must carry an additional (compounded) 7-day waiting period as well. And the logistics become extremely complicated upon submission of a successful fraud proof on the L2 (the parent of the L3 in question).
Suppose block L3_N is where a massive transaction takes place on L3. Over the next 7 days many transactions take place on L3 and L2, but L3_N is now finalized on L2, and a transfer from L3 to L2 can happen in block L2_N. The transfer is deemed successful, but after another 7 days someone submits a fraud proof reverting L2_N, and therefore L3_N and all the L3 blocks built over those 14 days! The ecosystem would be in mayhem in this worst-case scenario, though on average it is a very rare occurrence. Of course, it's not hard to see that a ZK proof submitted for L3_N and L2_N would have finalized them instantly on L1, and it would never have come to this.
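The compounding is simple arithmetic: each optimistic hop between a chain and L1 adds its own challenge window. A tiny sketch, assuming a uniform 7-day challenge period at every layer (real deployments can differ):

```cpp
// Days before assets on layer `n` (L2 = 2, L3 = 3, ...) become withdrawable
// on L1, assuming every optimistic hop imposes the same challenge period.
constexpr int kChallengeDays = 7;

constexpr int withdrawal_delay_days(int layer) {
    // One challenge window per hop down the stack: L2 -> L1 is one hop,
    // L3 -> L2 -> L1 is two, and so on.
    return (layer - 1) * kChallengeDays;
}
```

So an L2 withdrawal waits 7 days and an L3 withdrawal waits 14, matching the scenario above; a ZK validity proof collapses every one of these windows to the time it takes to verify the proof on L1.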
Conclusion
The security gap in rollups can be filled by an easy-to-use proving mechanism for full-block state transitions. Although optimistic rollups have a leg up in performance, ZKVM implementations have caught up, and Zilkworm can deliver the missing piece for an EVM-based L2/L3 chain to enable a zk-based rollup.