AI is redefining performance requirements for today’s data centers. Fortunately, there is no lack of technology innovation: GPUs, NVMe flash, and RDMA networking enable data scientists to process immense volumes of data. But these technologies come at a cost and must be deployed efficiently. Some data centers run their GPUs and NVMe drives at utilization ratios as low as 20% — a waste no organization can afford.
Stéphane Maillan helps build Orange’s infrastructure for AI, including state-of-the-art GPU servers, RDMA networking, and NVMe-based storage. In this webinar, he shares the experiences and challenges he encountered while paving the road to AI at Orange.
Stéphane concludes his session with his vision of the future of AI infrastructure.
In this webinar replay you will learn:
- How to share GPUs efficiently
- How to distribute GPU workloads
- How to meet your GPU storage requirements
- Best practices for building low-latency networks