Tech giants like Amazon, Facebook, and Google have redefined infrastructure for web-scale applications, leveraging standard servers and shared-nothing architectures to maximize operational efficiency and flexibility. Enterprises and service providers now seek to optimize their own infrastructures the same way. For storage, this means deploying scale-out storage infrastructures built on standard servers and software-defined storage solutions.

On the application side, the quest for near-zero-latency storage is real. In an era where technology is ubiquitous, the many latency-sensitive applications that surround us require fast, efficient processing of data at massive scale. Delivering near-zero latency at that scale is the remaining storage challenge and, by extension, the most pressing technology challenge for web-scale data centers.

Scalability

  • Scale linearly to any size; data center-ready

  • Client-side intelligence means data services scale as you add clients

  • Locking is 100% distributed across targets and clients and scales as you add nodes

  • No centralized metadata manager means no scalability bottleneck
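The last two points are related: when clients can compute data placement themselves, no central metadata manager sits in the IO path. The sketch below illustrates the general principle with a simple hash-based placement function; the server names, replica count, and hashing scheme are illustrative assumptions, not NVMesh's actual algorithm.

```python
# Illustration of client-side data placement: each client computes a
# logical block's target servers locally, so no central metadata
# manager is queried per IO. Generic sketch, not NVMesh's algorithm.
import hashlib

# Hypothetical target servers in the cluster
TARGETS = ["nvme-srv-01", "nvme-srv-02", "nvme-srv-03", "nvme-srv-04"]

def place(volume: str, block_no: int, replicas: int = 2) -> list:
    """Deterministically map a logical block to `replicas` target servers."""
    h = int(hashlib.sha256(f"{volume}:{block_no}".encode()).hexdigest(), 16)
    start = h % len(TARGETS)
    return [TARGETS[(start + i) % len(TARGETS)] for i in range(replicas)]

# Every client computes the same answer with no coordination:
print(place("vol1", 42))
```

Because placement is deterministic, any client arrives at the same target list independently, which is what lets data services scale as clients are added.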

New-generation flash media, such as NVMe and SCM, are raising the bar on storage latency. Single-digit microsecond (μs) latency is a reality when these devices are used locally. This sets expectations for application developers, who now get far better performance from one local NVMe flash device than from an entire enterprise-grade all-flash array.

But the real challenge is to share NVMe across the network: to deploy NVMe at scale with the same low latency as when it is used locally. This cannot be done with traditional controller-based architectures, which handle only modest levels of IO processing before their built-in bottleneck slows throughput, increases latency, and eventually tops out.
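A back-of-envelope latency budget shows why the fabric and the software path matter as much as the device once NVMe is shared over the network. All figures below are illustrative assumptions, not measured values.

```python
# Back-of-envelope latency budget for one remote NVMe read.
# Every number here is an illustrative assumption, not a measurement.

DEVICE_LATENCY_US = 10.0  # assumed NVMe read latency when accessed locally

def remote_latency_us(device_us: float, network_rtt_us: float,
                      software_us: float) -> float:
    """End-to-end latency of one remote IO: device time plus one
    network round trip plus per-IO software overhead."""
    return device_us + network_rtt_us + software_us

# Assumed fabrics: a kernel TCP/IP stack vs. an RDMA path with
# minimal per-IO software overhead
tcp = remote_latency_us(DEVICE_LATENCY_US, network_rtt_us=50.0, software_us=30.0)
rdma = remote_latency_us(DEVICE_LATENCY_US, network_rtt_us=10.0, software_us=2.0)

print(f"TCP/IP stack: ~{tcp:.0f} us")  # ~90 us
print(f"RDMA fabric:  ~{rdma:.0f} us")  # ~22 us
```

Under these assumptions, the device itself accounts for only a fraction of remote latency; the rest is fabric and software, which is why low-overhead networking is central to sharing NVMe without losing its local-latency advantage.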

New storage and networking technologies enable scale-out in-server flash architectures.

By deploying NVMe as in-server flash in a distributed, shared-nothing architecture, IO processing is freed from traditional controller limitations: as you add devices and network bandwidth, performance scales linearly. Applications benefit from the low latency of NVMe in a scale-out fashion, over standard networking or new-generation fabrics such as RDMA for ultra-low latency.
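The linear-scaling claim can be sketched numerically: with no central controller, aggregate performance is the sum of per-server capability, bounded on each server by whichever is lower, its devices or its network link. The per-device and per-NIC figures below are illustrative assumptions.

```python
# Sketch of linear performance scaling in a shared-nothing design.
# Per-device and per-NIC IOPS figures are illustrative assumptions.

def aggregate_iops(servers: int, devices_per_server: int,
                   iops_per_device: int, iops_per_nic: int) -> int:
    """Aggregate IOPS grows with node count; each server contributes
    the lesser of its device capability and its NIC capability."""
    per_server = min(devices_per_server * iops_per_device, iops_per_nic)
    return servers * per_server

# Assumed: 4 devices x 700K IOPS per server, NIC good for 2M IOPS
for n in (4, 8, 16):
    total = aggregate_iops(n, devices_per_server=4,
                           iops_per_device=700_000, iops_per_nic=2_000_000)
    print(f"{n:2d} servers -> {total:,} IOPS")
```

Doubling the node count doubles aggregate IOPS, in contrast to a controller-based array, where a fixed-capacity controller caps the total regardless of how many devices sit behind it.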
Excelero’s mission is to radically change the way digital businesses design storage infrastructures for low-latency, scale-out applications. Our goal is to enable storage managers to provide shared low-latency storage with efficiencies akin to those of the most efficient data centers.

Flexibility

  • Mix different storage media types, server types and deployment models

  • Use native NVMesh and/or NVMf transports to future-proof your environment

  • Avoid one-size-fits-all logical volume simplifications

  • Scale storage and CPU independently

Efficiency

  • Enable high NVMe SSD utilization rates

  • Use standard hardware

  • No CPU overhead

  • No forklift upgrades

  • Low management cost

Ease of Use

  • Easy to manage & monitor

  • Integration with orchestration layers for provisioning of logical volumes

  • Storage access API – block is the common building block

Performance

  • Leverage the full performance of your NVMe SSD at scale, over the network

  • True convergence to ensure deterministic application performance – no noisy neighbors

  • Leverage high IOPS, high bandwidth or mixed
