The essence of data-driven sciences such as deep learning is the ability to process as much collected and simulated data as possible in the shortest amount of time. GPUs have become the go-to compute resource behind training workloads, and NVMe flash has become the standard for high-performance, low-latency storage. By providing GPUs with direct access to an elastic pool of NVMe, data scientists and HPC researchers can feed far more data to their applications.

For this second webinar in our 2020 AI webinar series, we invited no fewer than three speakers:

Jacci Cenci is a Sr. Technical Marketing Engineer at NVIDIA. As an NVIDIA Deep Learning Institute (DLI) certified instructor, she is responsible for driving reference architecture development with a diverse set of infrastructure partners.

Dimitrios Emmanoulopoulos is Lead Data Scientist, Applied R&D at Barclays, and has years of hands-on AI, GPU, and NVMe storage experience. He came to share his experience building a full hardware and software stack for financial AI business use cases.

Our own Sven Breuner joined to share some exciting results from performance tests he and his team ran on NVIDIA DGX servers and Supermicro BigTwins with NVMesh Elastic NVMe.

In this webinar replay you will learn more about:

• Common challenges and pitfalls when deploying storage at scale for deep learning
• Best practices to meet the unique demands of GPU-accelerated AI and predictive analytics workloads
• How to accelerate your deep learning training workloads with Elastic NVMe