How We Designed for Scalability on Solana
Author: Alchemy

Solana is unlike any other blockchain. Its architecture was engineered from day one to push throughput to the limits of hardware — parallel execution, embedded timekeeping, mempool-free transaction forwarding — all working together to deliver thousands of transactions per second with sub-second finality. And with Firedancer now live on mainnet and Alpenglow on the horizon, that ceiling is about to rise dramatically.
Building infrastructure that keeps pace with this requires more than fast nodes. It requires designing every layer of the stack around the assumption that tomorrow's load will dwarf today's.
This is how we approached scalability when we rebuilt our Solana platform from the ground up — and how we're positioning it for what comes next.
Solana's scalability model is fundamentally different
Much blockchain infrastructure is designed around Ethereum's execution model: sequential transaction processing, a global state trie, and relatively modest data output. Solana breaks all of those assumptions.
Sealevel, Solana's parallel runtime, requires every transaction to declare its read and write sets upfront, enabling non-overlapping transactions to execute simultaneously across CPU cores. Proof of History embeds a cryptographic clock directly into the ledger, eliminating the coordination overhead that slows consensus on other chains. And Gulf Stream pushes transactions directly to the next block producer rather than letting them sit in a mempool, cutting confirmation latency further.
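To make the parallelism concrete, here is a minimal sketch of how declared read/write sets let a runtime batch non-conflicting transactions for simultaneous execution. This is a greedy simplification for illustration, not Sealevel's actual scheduler, and the transaction shapes are invented:

```python
# Two transactions conflict if either writes an account the other
# reads or writes; non-conflicting transactions can run in parallel.
def conflicts(tx_a, tx_b):
    return bool(tx_a["writes"] & (tx_b["reads"] | tx_b["writes"]) or
                tx_b["writes"] & (tx_a["reads"] | tx_a["writes"]))

def schedule(txs):
    """Greedily group transactions into batches whose members
    touch disjoint state and can execute simultaneously."""
    batches = []
    for tx in txs:
        for batch in batches:
            if not any(conflicts(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

txs = [
    {"id": 1, "reads": {"mint"},  "writes": {"alice"}},
    {"id": 2, "reads": {"mint"},  "writes": {"bob"}},    # disjoint from tx 1
    {"id": 3, "reads": {"alice"}, "writes": {"carol"}},  # reads what tx 1 writes
]
batches = schedule(txs)
# txs 1 and 2 land in one parallel batch; tx 3 must wait for tx 1.
```

Because the read and write sets are declared upfront, this scheduling decision requires no speculative execution or rollback, which is what lets Sealevel saturate CPU cores.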
The result is a network that produces massive amounts of data at extremely high velocity — over 4 petabytes annually at peak speeds. Infrastructure that serves this data can't treat it like Ethereum with a different RPC schema. It needs to be architected around Solana's specific data patterns, access characteristics, and scale.
That understanding shaped every decision we made.
Architecting for Solana's data characteristics
Solana's data profile creates a specific set of infrastructure challenges. Blocks are large and dense. Historical queries span enormous ranges. Developers need both random point lookups (fetching a single transaction by signature) and massive sequential scans (walking an address's full signature history). And all of this needs to happen at low latency, globally.
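The two access patterns correspond to different standard Solana JSON-RPC methods. The sketch below builds both request bodies; the method names and parameters are the real RPC API, but the signature and address values are placeholders:

```python
import json

def point_lookup(signature):
    """Random point lookup: fetch one transaction by signature."""
    return {"jsonrpc": "2.0", "id": 1, "method": "getTransaction",
            "params": [signature, {"maxSupportedTransactionVersion": 0}]}

def history_page(address, before=None, limit=1000):
    """Sequential scan: walk an address's signature history newest-first.
    Pass the last signature of the previous page as `before` to paginate."""
    opts = {"limit": limit}
    if before is not None:
        opts["before"] = before
    return {"jsonrpc": "2.0", "id": 1,
            "method": "getSignaturesForAddress", "params": [address, opts]}

req = history_page("ExampleAddress111", before="ExampleLastSig222", limit=500)
body = json.dumps(req)  # POST this to an RPC endpoint
```

A point lookup touches one row; a full history walk may issue thousands of these paged requests back-to-back, which is why the storage layer has to serve both patterns well.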
We covered the deep technical details of our archival architecture in a separate engineering post — the HBase migration, triple-verified ingestion, self-healing pipelines, and the specific optimizations that got us to 100,000+ RPS per region on getTransaction. This post is about the design principles behind those decisions.
The core principle: separate the concerns that need to scale independently. Traditional Solana infrastructure bundles data ingestion, storage, and serving into a single monolithic validator node. That coupling means you can't easily scale reads, can't restart a service without hours of downtime, and can't optimize storage layout without touching the entire stack.
We decomposed the system into independent services — lightweight RPC servers that start in seconds, dedicated ingestors with granular control over which data types each instance handles, and a storage layer designed specifically for Solana's access patterns. Each layer scales on its own terms. When traffic spikes, we spin up additional RPC instances in seconds against the same data layer. When a new region needs to come online, the storage layer replicates independently of the serving tier.
Reliability as a scalability prerequisite
There's a pattern in infrastructure: teams optimize for speed, hit a scaling wall, and then realize the wall was actually a reliability problem. Dropped connections, incomplete data, and silent failures under load are all scalability failures in disguise.
We designed around this from the start. Our multi-region architecture includes 3 to 5 layers of autonomous failover — if a node, a rack, or an entire region degrades, traffic reroutes automatically.
On the data integrity side, every record is written twice, validated programmatically, and continuously scanned for completeness. If a discrepancy appears, our self-healing pipelines automatically re-ingest and repair the gap — cross-checking up to 30–50 related addresses per block. The entire repair process is automated and runs continuously. It's not a static monitoring dashboard with alerts.
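The core of any completeness scan is finding contiguous gaps to re-ingest. Here is a toy version of that step (an illustration of the idea, not Alchemy's actual pipeline): given the slots actually ingested over a range, emit the missing ranges a repair job would backfill:

```python
def find_gaps(ingested_slots, start, end):
    """Return (first_missing, last_missing) ranges within [start, end]."""
    have = set(ingested_slots)
    gaps, gap_start = [], None
    for slot in range(start, end + 1):
        if slot not in have:
            if gap_start is None:
                gap_start = slot          # a gap begins
        elif gap_start is not None:
            gaps.append((gap_start, slot - 1))  # the gap just closed
            gap_start = None
    if gap_start is not None:
        gaps.append((gap_start, end))     # gap runs to the end of the range
    return gaps

gaps = find_gaps([100, 101, 104, 105, 108], start=100, end=108)
# Slots 102-103 and 106-107 were never ingested and would be repaired.
```

In a real pipeline this scan runs continuously over recent slot ranges, and each detected gap triggers an automatic re-ingestion rather than a pager alert.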
Data integrity on Solana is a harder problem than most acknowledge. The sheer volume and velocity of data means that even small gaps compound quickly — and it's a common pain point among development teams, many of whom have reported missing or inconsistent data from their infrastructure providers. It's one of those problems that's invisible until a developer's app breaks because a transaction wasn't indexed or a signature history is incomplete. We treat correctness as non-negotiable, not best-effort.
The result is 99.99% uptime backed by the same operational rigor we've applied for 8+ years powering apps like Polymarket on election night and World's mainnet launch. Reliability at this level isn't a feature — it's what makes the rest of the scalability story possible.
Designing for Solana's next era of throughput
Solana's current throughput is measured in thousands of transactions per second. The protocol's roadmap points toward orders of magnitude more, and infrastructure that isn't architected for that trajectory will become the bottleneck.
Firedancer, Jump Crypto's independent validator client written in C, went live on mainnet in late 2025. In lab conditions, it has demonstrated over 1 million transactions per second. Its modular tile-based architecture, kernel-bypass networking, and parallel signature verification represent a fundamental rethinking of how validator software uses hardware. As of late 2025, roughly 21% of Solana's stake was running Firedancer, and adoption is accelerating.
Alongside Firedancer, the Alpenglow protocol upgrade aims to rewrite Solana's consensus mechanism and reduce block finality to approximately 150 milliseconds. And proposals like SIMD-0370 would remove the block-level compute cap entirely, letting blocks scale based on what hardware can actually process rather than an artificial software limit.
Additionally, ZK Compression, developed by Light Protocol, will allow multiple account states to be compressed into a single onchain account using zero-knowledge proofs. This addresses Solana's growing state storage challenge and opens up entirely new application design patterns.
Our infrastructure is designed with these upgrades in mind. The decomposed architecture means we can scale each layer independently as throughput requirements grow. The multi-region deployment provides the geographic distribution needed to handle global traffic at higher volumes. This is why our 2x throughput advantage over other Solana RPC providers is a starting point, not a ceiling.
What scalable infrastructure unlocks for builders
Scalability directly determines what developers can build. Over the past eight years, we've supported builders from their first API call through every inflection point that followed. Teams like Circle, Robinhood, and OpenSea have scaled on Alchemy from early prototypes to products serving hundreds of millions of users.
- Millisecond archive queries mean wallets can render complete transaction histories instantly.
- Recency-first queries eliminate the need to scan through years of old data just to surface a user's latest activity.
- getTokenLargestAccounts returning 1,000 results instead of 20 gives analytics platforms meaningfully richer views of token distribution.
- Native gasless transactions remove the onboarding friction that kills conversion.
- 100% staked writes ensure transactions land on actual block producers, improving confirmation reliability.
These capabilities exist because we treat scalability as a first principle.
The road ahead
Solana is evolving fast. The network's capacity will increase, and the applications built on it will become more demanding at scale.
Our commitment is to stay ahead of that curve, investing in lower latency, higher throughput, deeper data access, and new capabilities that unlock use cases that aren't possible yet. Infrastructure should never be the reason a builder has to compromise on their product.
Everything we've described is available today.
- Start at alchemy.com/solana
- Contact us for custom benchmarking, specialized pricing, and more.