Press Release

Simplyblock Achieves 99.4% GPU Utilization Rate for AI Workloads on AWS

Simplyblock excels in MLPerf Storage v1.0 Benchmark, maximizing GPU performance while lowering storage costs for AI and Machine Learning infrastructures.

Simplyblock, the leading cloud-native storage orchestration platform, announced today an impressive achievement in the MLPerf™ Storage v1.0 Benchmark organized by MLCommons, showcasing its ability to optimize AI workloads and GPU performance on AWS. Simplyblock achieved a 99.4% GPU utilization rate, proving its capability to maximize efficiency while dramatically cutting storage costs for AI, Machine Learning, and Vector Database infrastructures.

In the face of rapid AI advancement, organizations demand highly efficient, high-throughput, and low-latency storage solutions to support the enormous data loads that modern AI workloads require. Simplyblock’s intelligent storage orchestration addresses this need by seamlessly integrating and managing AWS storage services, including Amazon EBS, Amazon S3, and local NVMe instance storage. The result is a unified, high-performance storage environment that meets the stringent requirements of AI and ML applications.

Benchmark Success: Scaling AI Infrastructure on AWS

Simplyblock’s achievement was demonstrated through its participation in the MLPerf Storage v1.0 Benchmark. The test ran an AWS storage cluster of five i3en.6xlarge nodes against a single c6in.8xlarge client emulating 24 NVIDIA H100 GPUs. This configuration delivered an aggregate throughput of over 37 Gbit/s (~194 MB/s per GPU) at an average GPU utilization rate of 99.4%. Performance scales linearly as additional clients run the workload.
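For context, the per-GPU figure follows directly from dividing the aggregate throughput across the 24 emulated accelerators. The minimal Python sketch below reproduces that arithmetic; the 37.3 Gbit/s input is an assumed reading of “over 37 Gbit/s”, and the decimal unit conventions are our assumption rather than figures taken from the benchmark report.

```python
# Back-of-the-envelope check of the reported per-accelerator throughput.
# Assumptions: decimal units (1 Gbit/s = 1e9 bit/s, 1 MB = 1e6 bytes) and an
# aggregate of 37.3 Gbit/s as a plausible reading of "over 37 Gbit/s".

AGGREGATE_GBIT_S = 37.3      # assumed aggregate throughput, Gbit/s
NUM_ACCELERATORS = 24        # emulated H100 accelerators on the single client

aggregate_mb_s = AGGREGATE_GBIT_S * 1e9 / 8 / 1e6   # convert Gbit/s to MB/s
per_accel_mb_s = aggregate_mb_s / NUM_ACCELERATORS

print(f"Aggregate throughput: {aggregate_mb_s:.0f} MB/s")
print(f"Per accelerator:      {per_accel_mb_s:.0f} MB/s")   # roughly 194 MB/s
```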

The MLPerf Storage benchmark is the first and only open, transparent benchmark to measure storage performance across a diverse set of ML training scenarios. It emulates the storage demands of several scenarios and system configurations covering a range of accelerators, models, and workloads. The benchmark emulates NVIDIA A100 and H100 accelerators as representatives of the current generation of accelerator technology.

This benchmark highlights simplyblock’s ability to serve AI workloads efficiently, a key consideration for organizations running data-intensive AI models. Traditionally, such workloads have been bottlenecked by slow storage access over legacy protocols such as NFS. Simplyblock removes this limitation by providing high-throughput, low-latency NVMe-based block storage that multiple clients can access simultaneously, keeping data readily available for demanding AI training and inference operations.

Rob Pankow, CEO of Simplyblock

“As AI moves to the center of the spotlight, it’s crucial that data and storage infrastructure solutions keep pace with the increasing demands of ML workloads. Our performance in the MLCommons benchmark demonstrates that simplyblock not only meets these demands but exceeds them, offering unparalleled efficiency and cost-effectiveness. What’s especially notable is that we’ve achieved these results on AWS, at a time when most AI training still happens outside of public clouds. Simplyblock brings the ability to run AI cost-efficiently on AWS and positions us to serve not only AI and ML workloads but any future high-performance use cases that AI may drive. We are happy to partner with MLCommons to support the efforts of developing an industry-standard benchmark.”

Redefining AI and ML Storage with simplyblock

Simplyblock’s orchestration platform stands out by creating dynamic storage pools that are thinly provisioned into logical volumes, offering superior flexibility and cost efficiency. By orchestrating data across multiple AWS storage services (including Amazon S3 for cost-effective long-term storage and Amazon EBS for high-performance SSD storage), simplyblock can significantly reduce storage expenditure while simultaneously delivering top-tier performance.
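To make the thin-provisioning and tiering argument concrete, here is a minimal, purely illustrative Python sketch of the cost model it implies. All capacities, fractions, and per-TB prices below are placeholder assumptions for illustration; they are not AWS list prices, simplyblock pricing, or figures from this announcement.

```python
# Illustrative (hypothetical) model of why thin provisioning plus tiering can
# cut storage spend: capacity is paid for only when actually written, and cold
# data can sit on object storage instead of SSD-backed block storage.
# All values are placeholders, not real AWS or simplyblock prices.

PROVISIONED_TB = 100        # logical capacity exposed to applications
WRITTEN_FRACTION = 0.35     # share of provisioned space actually written
HOT_FRACTION = 0.40         # share of written data kept on fast storage

SSD_PRICE_PER_TB = 80.0     # placeholder $/TB-month, SSD-backed block storage
S3_PRICE_PER_TB = 23.0      # placeholder $/TB-month, object storage

# Thick provisioning: pay for all provisioned capacity on SSD.
thick_cost = PROVISIONED_TB * SSD_PRICE_PER_TB

# Thin provisioning + tiering: pay only for written data, split across tiers.
written_tb = PROVISIONED_TB * WRITTEN_FRACTION
thin_cost = (written_tb * HOT_FRACTION * SSD_PRICE_PER_TB
             + written_tb * (1 - HOT_FRACTION) * S3_PRICE_PER_TB)

print(f"Thick-provisioned SSD cost:    ${thick_cost:,.0f}/month")
print(f"Thin-provisioned, tiered cost: ${thin_cost:,.0f}/month")
```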

For AI and ML workloads, particularly those involving vector databases and large-scale model training, efficient GPU utilization and fast access to training data are critical. Simplyblock ensures that storage bottlenecks are eliminated, allowing organizations to fully exploit their GPU resources and further accelerate model training and inference times.

Use Cases in AI and Machine Learning

  • Telecommunications: For telcos running critical infrastructure such as PostgreSQL databases or Kafka brokers for billing systems, simplyblock’s low-latency storage ensures real-time data processing without compromising performance. With automatic cross-system disaster recovery options and point-in-time recovery, critical customer data remains safe and operations uninterrupted.
  • E-commerce: For e-commerce platforms handling real-time customer data, simplyblock provides the low-latency storage required to power AI-driven recommendation engines, dynamic pricing algorithms, and real-time inventory management. With automatic storage tiering and scalable storage pools, simplyblock helps reduce costs by transparently moving infrequently accessed data to cheaper storage such as Amazon S3, while ensuring high performance for critical, frequently accessed data during peak traffic events such as Black Friday sales.
  • SaaS: SaaS providers on AWS benefit from simplyblock’s thin provisioning and automatic storage tiering, reducing the need for manual storage management and minimizing costs by moving infrequently accessed data to cheaper storage options, such as Amazon S3, without impacting application performance.

The Future of AI-Optimized Storage

With AI and machine learning workloads set to grow exponentially, simplyblock offers a solution that combines high performance, scalability, and cost-efficiency. By delivering near-perfect GPU utilization, as demonstrated in the MLPerf benchmark, simplyblock is well-positioned to become the go-to storage solution for companies pushing the boundaries of AI and data-driven innovations.

About MLCommons

MLCommons is the world leader in building benchmarks for AI. It is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members (global technology providers, academics, and researchers), MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI Safety.

About Simplyblock

Simplyblock’s intelligent storage optimization platform orchestrates various AWS storage services, including Amazon EBS, Amazon S3, and local instance NVMe storage, into a unified, high-performance storage solution. Unlike traditional storage systems, simplyblock creates dynamic storage pools that can be divided into thinly provisioned logical volumes, offering unprecedented flexibility and efficiency for AI and Machine Learning infrastructures.

For more information, visit: www.simplyblock.io

Media Contact

[email protected]