
NVMe Is Just an Enabler!

By Yaniv Romem | May 9, 2018

The much-respected Dave Raffo wrote a very interesting opinion piece for SearchStorage titled "NVMe technology is but a first step toward bigger things".

Dave's article starts by praising NVMe technology but then takes an interesting turn as he states: "NVMe technology is part of an evolving flash world, a step toward much more significant advances. You should look at NVMe as the beginning of a transition to storage-class memory (SCM)." He then provides an interesting overview of what comes after NVMe (spoiler: it's SCM) and concludes with a look at established flash vendors and up-and-coming innovators, including Excelero. We couldn't agree more with Dave that NVMe is just an enabler.

What we see in the market today is that top-performing companies are leaders in data analytics: they have optimized infrastructures and applications to capture, store and analyze more data faster in order to make better business decisions. This makes storage infrastructures the backbone of next-generation analytics applications. These applications essentially have two storage requirements: faster storage and more scalable architectures. NVMe, as Dave wrote, is the first step. Software-defined storage architectures are the bigger things.

Dave ends his article with a few interesting questions. As his article reads so much like our own story, we asked our Chief Architect Kirill Shoikhet to take a look at the questions and provide some Excelero insights:

What do you mean by NVMe-ready? Is this merely replacing SAS SSDs with NVMe SSDs, or did you make other architecture or management changes?

  • NVMe-readiness should inevitably include re-analyzing performance bottlenecks from scratch at the system level and changing architectures as required. For example, with SAS drives it was common practice to put a 24-disk enclosure behind a few 12Gb SAS controllers, since the drives couldn't saturate them anyway, or to prefer sending multiple copies over the network rather than writing them to a SAS drive. With NVMe drives such assumptions need to be re-examined: bottlenecks move, and the old dual-controller-plus-JBOD architecture can't compete with architectures built for NVMe (see the back-of-envelope sketch after this list).
  • The management plane changes as well, since the network becomes an integral part of the shared NVMe solution: NVMf runs over potentially heterogeneous network infrastructures and may include vastly different performance tiers (QLC/RI NVMe drives, NVMf drives or local SCM drives…) with different quality-of-service requirements.
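To make the bottleneck shift concrete, here is a minimal back-of-envelope sketch in Python. All per-device and per-controller figures are illustrative assumptions (roughly 250 MB/s per SAS drive, 3 GB/s per NVMe drive and 4.8 GB/s per 12Gb SAS x4 controller port), not measurements of any particular product:

```python
# Back-of-envelope: why a 24-drive JBOD behind a couple of SAS controllers
# stops being "good enough" once the drives are NVMe.
# All figures below are illustrative assumptions, not vendor specs.

SAS_DRIVE_GBPS = 0.25    # ~250 MB/s per SAS drive (assumed)
NVME_DRIVE_GBPS = 3.0    # ~3 GB/s sequential read per NVMe drive (assumed)
CONTROLLER_GBPS = 4.8    # one 12Gb SAS x4 port, ~4.8 GB/s usable (assumed)

DRIVES = 24
CONTROLLERS = 2

drive_bw_sas = DRIVES * SAS_DRIVE_GBPS          # 6 GB/s  -> controllers are not the limit
drive_bw_nvme = DRIVES * NVME_DRIVE_GBPS        # 72 GB/s -> drives now dwarf the controllers
controller_bw = CONTROLLERS * CONTROLLER_GBPS   # 9.6 GB/s

print(f"SAS-era drive bandwidth:  {drive_bw_sas:.1f} GB/s vs controllers {controller_bw:.1f} GB/s")
print(f"NVMe-era drive bandwidth: {drive_bw_nvme:.1f} GB/s vs controllers {controller_bw:.1f} GB/s")
print(f"Fraction of NVMe bandwidth the old design can expose: {controller_bw / drive_bw_nvme:.0%}")
```

Under these assumptions the legacy dual-controller design exposes only a small fraction of what the NVMe drives can deliver, which is exactly why the architecture, not just the media, has to change.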

What is your plan for NVMe over Fabrics? Which fabrics will you support, and how will they affect the applications I’m running now or may run in the future?

  • NVMf poses hard questions in the enterprise domain, where SANs were often separate from Ethernet-based LANs and often managed by different departments. If one simply keeps the FC SAN and runs NVMf(FC) to NVMf(FC) targets that in turn translate the traffic to NVMf(Eth/IB) toward the actual shared NVMe storage, whether disaggregated or converged, this imposes a performance tax on the NVMf(FC) target servers and unnecessary latency for the clients (a rough latency budget after this list illustrates the cost). On the other hand, throwing away the SAN infrastructure raises the question of whether to keep separate storage and compute networks, which essentially doubles the network infrastructure (server-to-storage vs. server-to-server), or to unify them, which can lead to 'a world of pain' from the QoS and management perspective. The former approach makes more sense for disaggregated or composable infrastructure, but runs counter to more modern datacenter approaches.
  • The questions above, the inconsistent handling of NVMf across platforms (bare metal, Linux, VMW, …), and open questions on how shared access is implemented should, at least in the short to mid-term, lead to a preference for solutions like NVMesh that provide tighter control over networking issues and can utilize SmartNICs to alleviate heterogeneous-infrastructure issues.
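To put a rough number on that performance tax, here is a simple latency-budget sketch. The microsecond figures are illustrative assumptions only; they are not measurements of any specific fabric, gateway or product:

```python
# Rough latency budget for a small read, comparing a direct NVMf path with
# a path that traverses an FC front end which re-issues the I/O over
# Ethernet/IB to the real NVMe target. All figures are illustrative assumptions.

NVME_MEDIA_US = 80.0    # assumed NAND NVMe read latency
FABRIC_HOP_US = 10.0    # assumed added latency per network hop
GATEWAY_CPU_US = 15.0   # assumed protocol-translation cost on the NVMf(FC) target server

direct = NVME_MEDIA_US + FABRIC_HOP_US                               # client -> NVMe target
via_fc_gateway = NVME_MEDIA_US + 2 * FABRIC_HOP_US + GATEWAY_CPU_US  # client -> gateway -> target

print(f"Direct NVMf path:       {direct:.0f} us")
print(f"Via FC translation hop: {via_fc_gateway:.0f} us (+{via_fc_gateway / direct - 1:.0%})")
```

Even with generous assumptions, the extra hop and translation work add a double-digit percentage to every I/O, which is latency the application never gets back.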

What is your SCM roadmap? How do you view future advances, such as 3D XPoint, Z-NAND and other emerging technologies, and how can I prepare for them?

  • One needs to be careful with the definitions here: SCM implies Memory, meaning it provides memory semantics, while NVMe is a block protocol providing block semantics. SCM can be packaged in physical (e.g., Optane, Z-SSD) or virtual (pmem.io) block device form with NVMe protocol access, but when we talk about SCM we need to clearly distinguish between two issues:
  1. Whether one's architecture is ready for SCM-based block devices, in the sense of fully exploiting the huge latency benefits of such devices. The vendor should show that the additional latency incurred by their stack does not erode these benefits (see the sketch after this list). NVMesh's overhead is small enough to clearly carry through the performance benefit of SCM-based NVMe devices while providing sharing functionality and storage services such as data protection.
  2. SCM will mean that more and more applications can run in-memory without needing underlying storage in the main data path. But for HA they will still need shared, lower-cost storage with the highest possible write bandwidth and consistently low synchronous latency, and this is where shared, low-latency architectures like NVMesh will again shine.
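To illustrate the first point: a fixed software overhead that is negligible in front of NAND-based NVMe becomes dominant in front of SCM-class devices. The latencies below are assumptions chosen for illustration, not NVMesh or vendor measurements:

```python
# Why a fixed software overhead that was tolerable for NAND NVMe matters so
# much more for SCM-class devices. All latencies are illustrative assumptions.

devices_us = {
    "SATA SSD": 100.0,
    "NAND NVMe": 80.0,
    "SCM-class device": 10.0,
}
STACK_OVERHEAD_US = 5.0  # assumed fixed cost of the sharing / storage-services layer

for name, media_us in devices_us.items():
    total = media_us + STACK_OVERHEAD_US
    print(f"{name:18s} media {media_us:6.1f} us, with stack {total:6.1f} us "
          f"(overhead = {STACK_OVERHEAD_US / media_us:.0%} of media latency)")
```

The same 5 microseconds that amount to a few percent on a NAND device become half of an SCM device's media latency, which is why the thinness of the software stack decides whether SCM's benefits survive at all.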