Cloud Exit Is Real: Why Cloud Economics Break Down at Scale

Apr 15th, 2025 | 6 min read

When I wrote “Why Companies Are Ditching the Cloud: The Rise of Cloud Repatriation” last year, the idea of moving workloads off the public cloud still felt somewhat radical in mainstream infrastructure circles. But the response surprised me, especially on Hacker News, where readers shared their own struggles with rising cloud bills, vendor lock-in, and the loss of architectural control. Of course, many still defended the cloud, especially for early-stage companies that lack the stable, predictable workloads of a GEICO or 37Signals.

Now, in 2025, the conversation has moved forward. Cloud repatriation isn’t theoretical anymore. It’s not just a talking point—it’s something CIOs, platform teams, and CTOs are actively planning for. And recent moves by AWS and others make one thing very clear: they see the pressure mounting too.

The S3 Express Discount Is Not the News

Just a few days ago, AWS announced a price cut of up to 85% on S3 Express One Zone. While it made headlines, the announcement mostly confirms what many of us already knew—storage in the cloud is becoming a bottleneck. Not just technically, but economically.

S3 Express is a high-performance, single-AZ tier for special workloads. It’s not a replacement for S3 Standard or the default choice for general-purpose data storage. It doesn’t guarantee multi-zone durability. If anything, the fact that AWS had to slash pricing for this class by such a large margin suggests a broader issue: customers are hitting price ceilings, and the traditional “pay-as-you-grow” model is reaching its limits.

Rebecca Weekly, VP of Infrastructure Engineering at GEICO, said it plainly in our previous article: “Storage in the cloud is one of the most expensive things you can do, followed by AI in the cloud.” That reality hasn’t changed.

What Cloud Exit Looks Like in Practice

Let’s talk cloud exit numbers again—this time at scale.

With AWS S3 Express One Zone now priced at $0.11 per GB per month, storing 2 PiB of data over four years would cost:

2 PiB = 2,048 TiB = 2,097,152 GB  

2,097,152 GB × $0.11 × 12 months × 4 years = ~$11,072,963

That’s over $11 million—and that’s just the base storage price. It doesn’t include data uploads, retrievals, or millions of PUT and GET requests, which are charged separately. And remember: S3 Express One Zone only stores data in a single AZ, with no durability guarantees across regions or zones.
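The arithmetic above is easy to sanity-check. A quick sketch, using the rate quoted in the text and deliberately ignoring request and transfer charges (as the text does):

```python
# 4-year base storage cost of 2 PiB on S3 Express One Zone at the quoted
# $0.11/GB-month rate. Request, upload, and retrieval fees are excluded,
# matching the simplification in the text above.
PRICE_PER_GB_MONTH = 0.11
capacity_gib = 2 * 1024 * 1024      # 2 PiB -> 2,097,152 GiB
months = 4 * 12

base_storage_cost = capacity_gib * PRICE_PER_GB_MONTH * months
print(f"${base_storage_cost:,.0f}")  # -> $11,072,963
```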

Now, let’s compare that to a real-world simplyblock deployment.

Consider the following configuration: a 5-node cluster of HPE DL360 Gen11 servers with 2 PiB of raw NVMe capacity across TLC and QLC SSDs, connected via 20×100G Ethernet/RoCEv2 and delivering up to 12 million IOPS.

  • Hardware (CAPEX, incl. failure margin): $200,000
  • Simplyblock license: $10/TB/month × 2,048 TB × 48 months = $983,040
  • Data center colocation & power: $120/server/month × 5 servers × 48 months = $28,800
  • Total TCO: ~$1,211,840

That’s almost 90% less than AWS—with full multi-node high availability, real performance guarantees, and zero penalties for data access or scale.
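The TCO breakdown and the savings claim can be reproduced in a few lines, using the license and colocation rates quoted above:

```python
# TCO sketch for the 5-node on-prem cluster described above, using the
# simplyblock license and colocation rates quoted in the text.
hardware_capex = 200_000            # CAPEX incl. failure margin
license_cost = 10 * 2_048 * 48      # $10/TB/month x 2,048 TB x 48 months
colocation = 120 * 5 * 48           # $120/server/month x 5 servers x 48 months

total_tco = hardware_capex + license_cost + colocation
print(f"${total_tco:,}")            # -> $1,211,840

# Compare against the 4-year S3 Express One Zone base storage cost
# (2 PiB at $0.11/GB-month, computed the same way as above).
s3_cost = 2 * 1024 * 1024 * 0.11 * 48
savings = 1 - total_tco / s3_cost
print(f"{savings:.0%} cheaper")     # -> 89% cheaper
```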

More importantly, it’s your infrastructure. You can co-locate, automate, and scale it without asking for permission—or paying extra for visibility. This isn’t a thought experiment. We’ve helped organizations build high-performance infrastructure with predictable costs using exactly the setup described above.

It’s Also About Compute, Network—and AI

Storage may be the canary in the coal mine, but it’s not the only place where the cloud pricing model breaks down. High-performance compute and network-heavy workloads—especially those involving AI or real-time processing—are hitting the same ceiling.

Just look at AWS’s latest i7ie bare-metal instances. Announced only recently, these instances specifically target storage-intensive workloads, which further confirms that AWS recognizes the problem of storage costs in the cloud and is trying to address it. Let’s look at how that works out.

These machines are powerful, with up to 192 vCPUs, 120 TB of local NVMe, and 100 Gbit/s networking. But they’re also expensive: the i7ie.metal-48xl costs $24.95/hour on-demand, or over $218,000 per year per instance. And that’s without EBS, snapshots, bandwidth, or support.

[Image: Amazon Elastic Compute Cloud (EC2) I7ie instance pricing]

Let’s say you need a cluster of 10 of these to handle model training, real-time data pipelines, or low-latency backend workloads. You’re looking at $2.18 million yearly just to rent the compute. Reserved instance pricing might lower that somewhat, but even with discounts, you’re still looking at around $1 million per year.
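The on-demand figure follows directly from the quoted hourly rate. A minimal sketch, assuming 24/7 utilization of a 10-node cluster:

```python
# On-demand annual cost of ten i7ie.metal-48xl instances at the quoted
# $24.95/hour rate. Excludes EBS, snapshots, bandwidth, and support,
# as noted in the text.
hourly_rate = 24.95
hours_per_year = 24 * 365

per_instance = hourly_rate * hours_per_year   # ~$218,562/year each
cluster_annual = per_instance * 10
print(f"${cluster_annual:,.0f}")              # -> $2,185,620
```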

Now, compare that to purchasing and colocating high-performance x86 servers—dual-socket Xeon or EPYC, 1-2 TB of RAM, 100 Gbit/s NICs, and dozens of NVMe drives. The cost to buy such a system today is $25,000–30,000 per node, or $250,000–300,000 total for the same cluster—a fraction of the annual EC2 price. Colocation, power, and support might add another $5,000–8,000 per month across the entire fleet.

Even if you over-provision by 20–30% for peak load, the economics still work out in your favor within the first 9–12 months—and from there, it’s all margin.
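The break-even point can be sketched from the figures above. This is a rough model, not a quote: it uses the midpoints of the quoted ranges (~$275k fleet CAPEX with 30% over-provisioning, ~$6.5k/month for colocation, power, and support, versus ~$1M/year reserved-instance pricing), and with these assumptions break-even lands even earlier than the conservative estimate above.

```python
# Months until cumulative on-prem spend falls below cumulative cloud spend.
# Assumed inputs (midpoints of the ranges quoted in the text):
capex = 275_000 * 1.3            # 10-node fleet, +30% over-provisioning
onprem_monthly = 6_500           # colo, power, support across the fleet
cloud_monthly = 1_000_000 / 12   # reserved-instance pricing, ~$1M/year

month = 0
onprem_total, cloud_total = capex, 0.0
while onprem_total >= cloud_total:
    month += 1
    onprem_total += onprem_monthly
    cloud_total += cloud_monthly

print(f"break-even in month {month}")  # -> break-even in month 5
```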

                               AWS i7ie.metal     On-Prem (DL360/EPYC)
  vCPUs                        192                128–256
  NVMe Storage                 Up to 120 TB       Up to 150 TB
  Network                      100 Gbit           100 Gbit
  Annual Cost (single server)  ~$120,000          ~$7,500
  Ownership                    No                 Yes
  Scaling                      Pay-as-you-go      Linear, controlled
  Sovereignty                  Limited            Full control

And when you own the system, you also control the performance. You can run workloads with predictable latency. You don’t need to worry about noisy neighbors, throttling, or EBS bottlenecks. You can colocate storage and compute for AI workloads and stream data from NVMe to GPU memory with minimal latency. You’re not locked into AWS Nitro or trying to force-fit Kubernetes across awkward cloud primitives.

AI Has Changed the Game

What’s driving this shift and cloud repatriation isn’t just economics. It’s architecture. AI and machine learning workloads are fundamentally data-bound. Training large models requires sustained, high-throughput access to massive datasets. Cloud infrastructure was never designed with this pattern in mind.

Every I/O operation becomes a billable event. Every training epoch adds to the invoice. Every retry across an unreliable storage backend costs more. For companies working with proprietary models or regulated datasets, the problem compounds with sovereignty concerns. You can’t always afford to train in the cloud—legally or economically.

Bringing compute and storage closer together—physically and architecturally—is no longer optional for AI teams. It’s a competitive advantage.

The Cloud Was a Great Default. But It’s Time to Choose.

Cloud repatriation used to be about cost. Now, it’s about strategy.

The teams moving fastest aren’t just saving millions. They’re building more performant systems. They’re streamlining ops. They’re reducing complexity. And they’re creating infrastructure that aligns with their needs, not someone else’s roadmap or pricing model.

The public cloud remains invaluable for elastic scaling and rapid experimentation. But for sustained, high-performance workloads—AI, databases, analytics, streaming—it’s becoming the wrong tool for the job.

And with platforms like simplyblock, owning your infrastructure no longer means going back in time. It means moving forward with better economics, more control, and architectures designed for this decade—not the last one.
