Kubernetes has become the de facto tool for orchestrating containerized workloads, and AI workloads are no exception. Because containers provide isolated environments and simplify reproducibility and portability, Kubernetes is a natural choice for data science, and an ecosystem of data science tools has grown up around containers and K8s.
Kubernetes provides the right abstractions to manage distributed applications across multiple cloud footprints, and many of these features also make it an attractive platform for machine learning systems. However, many organizations get caught in the gulf between the abstract argument that Kubernetes can be great for machine learning and actually realizing those benefits in practice. Will plans to show how Kubernetes can support the entire machine learning lifecycle, from provisioning self-service discovery environments to monitoring ML systems in production, and he will introduce you to some key community projects that make this easy and fun.
In this webinar replay you will learn:
- Practical fixes for supporting research environments with Kubernetes
- How to meet the needs of research experimentation with an orchestrator built for long-running services
- How IT departments can easily incorporate Kubernetes into their workflows
- The benefits of using NVMesh as a storage back end for Kubernetes
Omri Geller, CEO and co-founder of Run:AI
Gil Vitzinger, Software Engineer on the Management team at Excelero
William Benton, Engineering Manager at Red Hat