TLDR: To build a global onchain economy, we need to scale. To reach our North Star of 1 Ggas/s, we’re charting a path to 250 Mgas/s in 2025. Our main focus areas are: increasing L1 data availability for rollups, improving client execution speed, ensuring fault proofs continue to run as expected, and maintaining healthy disk usage for nodes.
Base’s mission is to build a global onchain economy that increases innovation, creativity, and freedom. To achieve this, one of Base’s five strategic pillars for 2025 is to scale Base and drive gas fees down so that everyone, everywhere can come onchain. So far, we’ve scaled throughput on Base to 24 Mgas/s (pronounced “megagas per second”) — nearly 10x where we were a year ago. But it’s still day one.
Our 2025 goal requires us to go an order of magnitude further: scaling throughput more than 10x beyond where we are today, and 100x the scale of Base at launch. This is key to bringing a billion people onchain: scaling allows Base to support more load while keeping user costs low, and low fees in turn enable more of the world to come onchain.
Since the beginning of Base, we’ve emphasized the importance of scaling throughput to keep fees low for users. The Base engineering team worked closely with the Ethereum ecosystem to scale Ethereum data availability through EIP-4844, enabling blobs to reduce L1 fees on L2s, and more recently we’ve been involved with efforts to increase L1 blob capacity.
In this technical deep dive, we’re detailing our progress in scaling Base so far — and how we’re planning to get to 250 Mgas/s in 2025.
Our North Star for throughput is 1 Ggas/s (pronounced “gigagas per second”), ~40x where we are today. Base needs to scale at least this much to support 1 billion users transacting onchain while ensuring transaction fees stay sub-cent.
We have approached this goal by identifying bottlenecks to scaling Base and the broader Ethereum ecosystem — both short-term and long-term — and proactively addressing them. It is a core part of our strategy to stay aligned with the industry at large to ensure we are building the best future for the Ethereum ecosystem as a whole.
Our definition of a healthy ecosystem is one where Base is for everyone. To achieve this, we need:
Sub-second transactions – to make Base usable for most real-world applications
Sub-cent transactions – to keep Base affordable, even during spikes in traffic
A decentralized, open platform – the only way we can build a truly global economy
A symbiotic relationship with the Ethereum ecosystem – synergistic success with Ethereum Mainnet, composable and collaborative with other L2s and L3s
L1 data availability (DA), also known as "blobspace," determines how much data L2s can post to L1. When blobspace is limited or contested, it creates a zero-sum situation in which L2s must compete for available space. This is a scenario we want to avoid, as it results in high fees for users and limits overall scaling of the ecosystem. However, with the emergence of new L2s and the ongoing scaling of Base and other L2s, we are already facing this challenge. For more people to come onchain, we need more data availability for rollups.
We have consistently been at DA capacity since the beginning of November 2024. This means that traffic spikes will create high L1 fees on L2s, and increase the overall cost of L2 transactions. Thus, this area is important to address to maintain a healthy ecosystem, both in terms of maintaining symbiotic relationships with L1 and other L2s as well as keeping transactions sub-cent.
Base currently consumes around 35% of blobspace and is the largest blob submitter. As we continue to scale, Base will consume more and more blobspace. We have anticipated L1 DA as a scaling bottleneck for a while, and have been actively working on both (1) optimizing DA usage, as well as (2) increasing the overall amount of blobspace available.
To optimize the DA usage of OP Stack chains, the Base team helped implement Brotli compression during batching in the OP Stack. This change was included in the Fjord OP Stack hardfork and led to a 35% reduction in DA usage. Another improvement, built by OP Stack contributors — span batches — allows OP Stack chains to minimize DA usage by encoding consecutive blocks in a more efficient format.
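For intuition on why batch compression matters so much for DA costs, here is a minimal sketch (not the OP Stack batcher; the payload is synthetic and the third-party brotli package is assumed to be installed) comparing Brotli against zlib, which the batcher used before Brotli support, on a repetitive batch-like payload:

```python
# Illustrative only: compares compression ratios on a synthetic, repetitive
# payload. This is NOT the OP Stack batcher; real batches are RLP/span-batch
# encoded channel frames, and real savings depend heavily on the data.
import os
import zlib

import brotli  # third-party: pip install brotli

# Synthetic "batch": repetitive calldata-like bytes mixed with a fixed random
# segment, loosely mimicking the redundancy in real L2 transaction batches.
payload = (b"\x00" * 12 + os.urandom(20) + b"transfer(address,uint256)") * 2000

zlib_out = zlib.compress(payload, level=9)
brotli_out = brotli.compress(payload, quality=11)

print(f"original:  {len(payload):>8} bytes")
print(f"zlib -9:   {len(zlib_out):>8} bytes ({len(zlib_out) / len(payload):.1%})")
print(f"brotli 11: {len(brotli_out):>8} bytes ({len(brotli_out) / len(payload):.1%})")
```

Real savings depend entirely on the data; the 35% figure above comes from measurements on actual Base batches, not a toy payload like this.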
On increasing overall DA, our team has an active and ongoing effort to drive and accelerate PeerDAS, which unlocks significantly more DA via data availability sampling. The ambitious goal is for PeerDAS to launch with a target of 50 blobs, and eventually support 256 or more. This will be critical for bringing a billion people onchain. There are many design challenges and hard technical issues to solve to get there (e.g., optimizing network topology and bandwidth usage in the p2p layer), and we’re excited to work with the ecosystem to solve them.
We are actively collaborating with L1 and L2 communities on PeerDAS research, implementation, and testing. We strongly advocated for its inclusion in the Pectra hard fork — but ultimately it was not included. Our team continues to prioritize supporting PeerDAS, with a significant blob increase, in the next hard fork, Fusaka.
We did get a smaller win in Pectra, though: we advocated for, and provided research on the safety of, an increase in the blob target. As a result, Pectra will include a doubling of the blob target — the target/limit will go from 3/6 to 6/9.
The following graph shows how Base’s scaling plans are affected by blob limits. This graph, like all other extrapolation graphs in this post, is an estimate and could be impacted by several variables. The vertical lines mark time-based milestones: Pectra is a solid line (relatively firm timeline) in early April, and Fusaka is a dotted line (in-flux timeline) in December. The horizontal lines convey limits on Base’s throughput: the dotted lines indicate a projection of when Base’s scale would correspond to consuming half of the overall blobspace, and the solid line when Base would consume all of it. These estimates are based on the assumption that blob usage is relatively linear to throughput.
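To make that extrapolation concrete, here is a rough sketch of the same kind of projection. Every constant below is an illustrative assumption (growth rate, start date, bytes of DA per unit of gas), not the model behind the actual graph; the bytes-per-gas figure is loosely back-calculated from the "~35% of blobspace at 24 Mgas/s" number above.

```python
# Illustrative extrapolation only: every constant here is an assumption for
# the sketch, not the model behind the graph above.
from datetime import date, timedelta

BLOB_BYTES = 128 * 1024       # size of one blob
L1_SLOT_SECONDS = 12          # Ethereum slot time
BLOB_TARGET = 6               # post-Pectra blob target per L1 block

# Rough bytes of blob data per 1 Mgas of L2 execution, back-calculated from
# "Base uses ~35% of the (pre-Pectra, 3-blob) target at 24 Mgas/s".
DA_BYTES_PER_MGAS = 480

START = date(2025, 2, 1)      # assumed start of the projection
START_THROUGHPUT = 24.0       # Mgas/s today
GROWTH_PER_DAY = 0.6          # assumed Mgas/s added per day

def blobspace_used(mgas_per_s: float) -> float:
    """Fraction of the L1 blob target consumed at a given L2 throughput."""
    da_bytes_per_s = mgas_per_s * DA_BYTES_PER_MGAS
    target_bytes_per_s = BLOB_TARGET * BLOB_BYTES / L1_SLOT_SECONDS
    return da_bytes_per_s / target_bytes_per_s

for label, fraction in [("half of blob target", 0.5), ("full blob target", 1.0)]:
    day = 0
    while blobspace_used(START_THROUGHPUT + GROWTH_PER_DAY * day) < fraction:
        day += 1
    throughput = START_THROUGHPUT + GROWTH_PER_DAY * day
    print(f"{label}: ~{START + timedelta(days=day)} at ~{throughput:.0f} Mgas/s")
```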
We see that until Fusaka lands, unless we have a major breakthrough around DA utilization, blobspace will once again become a blocker to scaling Ethereum rollup throughput.
We plan to support a more frequent and streamlined blob increase schedule (see proposal from OP Labs), separate from the hard fork schedule after PeerDAS is live. We believe this should only be pursued post PeerDAS given the network bandwidth constraints we would likely hit without it. We also believe the community’s focus should remain on shipping PeerDAS as a top priority, given this will be the biggest win long term.
In parallel, we will continue to optimize DA usage on the Base side. Future work here may include further compression, investigating ways to post less data to L1, and state diffs. Even with these optimizations, we assume DA usage will still grow linearly with network usage. An approach that may avoid this is to reduce the amount of L1 data posted per unit of gas, for example by repricing opcodes or introducing a multidimensional fee mechanism. Directing some traffic to L3s or offloading compute to prover networks would also help, especially for applications with high throughput requirements.
Client execution performance is a key component of maintaining a healthy node ecosystem, which is important for maintaining a decentralized platform. This in turn allows Base to handle more transactions while remaining sub-second and sub-cent. Under high load, it’s possible for nodes to fall behind the network, and in the worst case, not be able to catch up. We must ensure that block building speeds and sync times stay reliable as we continue to scale Base.
With Base’s current 2-second block times, we have defined the following chain health metrics for execution speed:
Average (P50) block execution/sync time: <500ms
P90 block execution/sync time: <1s
P999 block execution/sync time: <2s
We enforce stricter ranges for our sequencer, but these SLOs ensure the majority of nodes stay in sync with the live state of the chain. We will continue to evaluate and redefine these SLOs as we monitor the health of the overall Base ecosystem.
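To illustrate how SLOs like these can be checked, here is a minimal sketch using only the Python standard library. The execution-time samples are synthetic, and this is not Base’s actual monitoring pipeline:

```python
# Illustrative SLO check: compute P50 / P90 / P99.9 over sampled block
# execution times (ms) and compare against the thresholds above.
# The samples are synthetic; this is not Base's monitoring pipeline.
import random
import statistics

SLOS_MS = {0.50: 500, 0.90: 1000, 0.999: 2000}

# Synthetic sample: most blocks execute quickly, with a long tail.
random.seed(42)
samples_ms = [random.lognormvariate(5.5, 0.5) for _ in range(50_000)]

# 999 cut points; cut point i (1-indexed) is the i/1000 quantile.
cuts = statistics.quantiles(samples_ms, n=1000, method="inclusive")

for p, threshold in SLOS_MS.items():
    observed = cuts[round(p * 1000) - 1]
    status = "OK" if observed < threshold else "BREACH"
    print(f"P{p * 100:g}: {observed:7.1f} ms (SLO < {threshold} ms) -> {status}")
```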
Base has run op-geth since inception, and to maintain reliable infrastructure, we have performed several optimizations on our Geth nodes, including moving the database from LevelDB to PebbleDB and moving from a hash-based storage layout to a path-based one, both of which yielded minor improvements in block building speed.
Despite these improvements, we expect to hit a bottleneck in Geth’s block building speed, likely around 40–50 Mgas/s. Given this, we began benchmarking alternative high-performance execution clients. Reth, built by the Paradigm team, proved promising, with a ~70% improvement in p999 block building speed. We recently published a blog post with detailed findings. We are actively migrating all of our node software to Reth and encouraging others in the ecosystem to do the same. We anticipate this will roughly 2–3x our existing limit.
As part of our execution client performance analysis, we found that, regardless of the execution client, state access makes up a significant portion of latency. Blocks that contain high volumes of storage access take almost 30 times longer to execute than blocks with no storage access. Furthermore, for workloads with minimal state access, clients can likely perform up to ~500 Mgas/s. Because of this finding, we are actively researching database architectures that can reduce state access latency, especially as the state size continues to grow.
For future work on client execution performance, there are a few promising directions to explore. Since the EVM is single-threaded, parallel execution could allow nodes, which run on modern multi-threaded hardware, to better utilize compute resources. Just-in-time or ahead-of-time compilation of EVM bytecode to native code could speed up execution by removing the interpreter’s overhead. A direction that could provide a stepwise improvement is delayed state roots: by pipelining transaction execution, nodes would get two full slot times instead of one to execute transactions, store state changes, and compute the state root, roughly doubling the time available for block building. In addition, if the sequencer provided “access hints” indicating which storage slots a block will touch, nodes could save time by pre-loading the relevant state instead of fetching it during execution; a sketch of this idea follows below.
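To make the access-hints idea concrete, here is a minimal sketch of the concept. AccessHint, StateDB, and execute_block are hypothetical names invented for illustration and are not part of any existing client:

```python
# Minimal sketch of sequencer-provided "access hints": prefetch hinted storage
# slots concurrently so block execution hits a warm cache instead of disk.
# AccessHint, StateDB, and execute_block are hypothetical, not a real client API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessHint:
    address: str   # contract expected to be touched
    slot: int      # storage slot expected to be read

class StateDB:
    """Toy state database with a read-through cache."""

    def __init__(self, backing: dict):
        self._backing = backing   # stands in for slow disk/DB reads
        self._cache: dict = {}

    def read_slot(self, address: str, slot: int) -> int:
        key = (address, slot)
        if key not in self._cache:
            self._cache[key] = self._backing.get(key, 0)  # the "slow" path
        return self._cache[key]

def prefetch(state: StateDB, hints: list[AccessHint]) -> None:
    """Warm the cache for all hinted slots in parallel."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(lambda h: state.read_slot(h.address, h.slot), hints))

def execute_block(state: StateDB, txs: list, hints: list[AccessHint]) -> None:
    prefetch(state, hints)        # overlap storage I/O before execution starts
    for tx in txs:                # execution now mostly hits the warm cache
        ...                       # apply tx against `state`
```

The benefit comes from overlapping storage I/O for many slots at once, rather than paying for each read serially during execution.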
Fault proofs are essential for making Base an open and decentralized platform. To ensure that Base remains decentralized, we must ensure that the Fault Proof System (FPS) is live and running as expected, and that scaling does not introduce incompatibilities.
For the FPS to work as expected, at least one honest actor (a node running the challenger software) must participate to properly defend challenges to the system. As shared in a previous blog post, we created a benchmarking system to validate that any challenges to the FPS can successfully resolve within the allowed time frame (the “challenge window”). The initial results, shared in that post, showed that the existing FPS on Base comfortably supports scaling Base blocks to a limit of 120 Mgas (60 Mgas/s), with the main constraint coming from memory usage. More recent benchmarks are even more promising, potentially supporting a gas limit of up to 80–90 Mgas/s.
This falls significantly short of our 2025 year-end goal of 250 Mgas/s. It is also worth noting that to run a gas target of 250 Mgas/s, we would actually need fault proofs to support a gas limit of 500 Mgas/s, since the protocol fixes the gas target to limit ratio at 1:2. A 2:3 ratio would generally provide enough buffer for traffic spikes, but changing the ratio would require an OP Stack protocol change or setting the limit on the sequencer side.
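As a quick worked example of how the target-to-limit ratio translates into the gas limit fault proofs must support, assuming Base’s current 2-second blocks:

```python
# Worked example: the gas limit fault proofs must support for a given target,
# assuming Base's current 2-second block time.
BLOCK_TIME_S = 2
TARGET_MGAS_PER_S = 250

for name, limit_over_target in [("1:2 (current protocol ratio)", 2.0),
                                ("2:3 (enough buffer for spikes)", 1.5)]:
    limit_mgas_per_s = TARGET_MGAS_PER_S * limit_over_target
    print(f"{name}: limit = {limit_mgas_per_s:.0f} Mgas/s "
          f"({limit_mgas_per_s * BLOCK_TIME_S:.0f} Mgas per block)")
```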
We are focusing our efforts in a few areas to achieve these goals. OP Labs has made major improvements to the existing FPS in the OP Stack to address the memory usage bottleneck. These changes include enabling the Go runtime’s garbage collection to prevent runaway memory growth and moving from a 32-bit to a 64-bit MIPS architecture to allow for a larger heap. This should remove memory issues entirely, leaving speed as the only constraint. We are currently reviewing and benchmarking this updated FPS, called Multithreaded + 64-bit Cannon, which will likely support a gas target of up to ~50 Mgas/s. It is scheduled to go live in early Q2.
In addition, the Base engineering team supported the development of partial span batch validation in the recent Holocene OP Stack hardfork, which reduces the number of blocks a fault proof program needs to execute at once, thereby reducing resource constraints.
Even with these improvements, though, the existing Fault Proof System uses Geth as the default client. With our ongoing migration to Reth, we are increasingly interested in a Reth-based FPS. OP Labs has already created an implementation called Kona, which we are beginning to review as well. We believe this could unlock a stepwise improvement for scaling.
Looking forward, ZK-based Fault Proof Systems could provide another major unlock.
Maintaining a healthy state size and rate of growth is important for a healthy and distributed ecosystem of node operators. If nodes run low on disk space for active or historical state, operators can usually scale up their infrastructure to a larger instance. However, there are limits to the largest instance size that cloud infrastructure providers offer — historically 30TB of low-latency storage. More recently, larger instances have become available (AWS now offers an option up to 120TB), which pushes up this hard limit. It is worth noting that these are hard limits we cannot reverse ourselves out of: if we scale Base and nodes run out of disk space, scaling back down will not solve the problem.
On base.org/stats, we have visualizations of disk usage for both Geth and Reth archive nodes. Archive nodes store the entire historical state, so their disk usage grows over time. The graphs show current usage and projected growth over 1 and 6 months. They also illustrate another reason we are migrating to Reth: Reth has shown very promising improvements in both overall historical state size and its rate of growth. If we consider 30TB to be the limit, Geth is projected to hit it in late June, whereas with Reth, at the current average growth rate of ~8.5 GB/day, it would take roughly a decade.
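The Reth projection is simple enough to show as a back-of-the-envelope calculation. The only input taken from above is the measured ~8.5 GB/day growth rate; current disk usage is ignored, so treat the result as an upper bound on time-to-limit rather than a precise date:

```python
# Back-of-the-envelope archive disk projection using the measured Reth growth
# rate above. Current usage is ignored, so this is an upper bound, not a date.
DISK_LIMIT_TB = 30.0
GROWTH_GB_PER_DAY = 8.5

days = DISK_LIMIT_TB * 1000 / GROWTH_GB_PER_DAY
print(f"{DISK_LIMIT_TB:.0f}TB at {GROWTH_GB_PER_DAY} GB/day: "
      f"~{days:.0f} days (~{days / 365:.1f} years)")
```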
Note that the numbers above do not take Base’s scaling plans into account. Based on our preliminary analysis, disk usage growth has not been strongly correlated with gas increases, i.e., higher throughput has not led to a faster rate of growth. This is because history growth depends much more on usage patterns than on overall throughput, which suggests that even with gas increases, the factors that contribute to history growth are relatively stable.
However, increased throughput does open up the possibility of the historical state growing faster, especially as more users come onchain, so we will continue to monitor these metrics.
To address history growth concerns, we have migrated the majority of our nodes to Reth already, and are working to remove remaining dependencies on Geth archive nodes. In the future, we may consider utilizing alternative solutions for storing and retrieving historical state, such as the Portal Network, or traditional sharded databases.
Even with a full Reth migration and more disk space, state will continue to grow, and even the most optimized client will eventually run into an upper limit from the active state alone. Therefore, we must consider longer-term solutions, such as smarter ways to archive historical state or expire inactive state to prevent uncontrolled growth.
Putting together everything described so far, the following image shows the full picture of our anticipated scaling bottlenecks. We are going to have to tackle a lot of challenging problems, and likely some additional unknown ones not described here, to achieve our 2025 goal of 250 Mgas/s.
We also recognize that certain applications, such as games, may have higher requirements for speed and throughput but less need for the security guarantees of an Ethereum L2, and could outpace Base’s ability to scale. L3 appchains may provide a good alternative for hosting these applications.
We would not have been able to scale Base by nearly 10x since launch without the support of many teams across the ecosystem — and we’ll continue to collaborate to scale it further. We hope these efforts to scale Base provide learnings and infrastructure to enable other L2s and the rest of the Ethereum ecosystem to scale.
2025 is the year we scale together. We’re looking for collaboration partners to pursue challenges like shipping PeerDAS, state diffs, rearchitecting the storage/database layer, ZK Fault Proofs, and state expiry. If you’re interested in working on creative and ambitious scaling solutions to enable the next billion people to come onchain, we’re hiring — and we’d love to hear from you.
Follow us on social to stay up to date with the latest: X (Base team on X) | Farcaster | Discord