DGX Technical Sessions

As our valued NVIDIA DGX customer, we’re giving you direct access to our best practices and AI expertise through a series of live technical sessions. Get answers to your questions about DGX systems, with topics ranging from planning to deployment to ongoing optimization. The sessions are led by NVIDIA DGXperts — AI-fluent professionals who have deployed thousands of DGX systems like yours. These sessions are exclusive to DGX users, and registration links for upcoming sessions will be posted here.

Make sure you receive communications about future sessions by signing up for an account on our enterprise support portal. Your NVIDIA enterprise account manager can easily add you to the portal, or you can contact us at dgx_info@nvidia.com.


Due to overwhelming interest in the new Multi-Instance GPU (MIG) feature of DGX A100, we held three sessions that explored it in more detail. DGX A100 with MIG enables your team to support more AI workloads, right-size resources for every job, and increase overall system utilization. Check out the replays here:

MIG Technical Series (Part 1 of 3): Overview of MIG on DGX
MIG Technical Series (Part 2 of 3): MIG Use and Configuration on DGX
MIG Technical Series (Part 3 of 3): MIG in a Cluster
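As a taste of what the configuration session covers, here is a minimal sketch of enabling MIG and partitioning a DGX A100 GPU from the command line. This assumes a recent NVIDIA driver with MIG support; the exact profile names (e.g. `3g.20gb`) vary by GPU and driver version, so check the output of `nvidia-smi mig -lgip` on your own system.

```shell
# Enable MIG mode on GPU 0 (requires a GPU reset; setting persists across reboots)
sudo nvidia-smi -i 0 -mig 1

# List the MIG GPU instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Create two GPU instances from the 3g.20gb profile,
# along with their default compute instances (-C)
sudo nvidia-smi mig -i 0 -cgi 3g.20gb,3g.20gb -C

# Verify: the MIG devices now appear as separately schedulable resources
nvidia-smi -L
```

Each MIG device has its own dedicated memory, cache, and compute slices, which is what allows several jobs to share one A100 without contending for resources.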

Make sure to sign up for our future monthly sessions. We’d love to see you there!


Join us for our next live technical session on Friday, June 23 at 8am PT. Register here. Note: Registration is required and will only be approved for customers that have an active support contract.

Enterprise MLOps 101 on DGX Systems
The boom in AI has driven rising demand for better AI infrastructure — both in the compute hardware layer and in AI framework optimizations that make optimal use of accelerated compute. Unfortunately, organizations often overlook the critical importance of a middle tier: infrastructure software that standardizes the machine learning (ML) life cycle, providing a common platform where teams of data scientists and researchers can align their approach and eliminate distracting DevOps work. This practice of building and managing the ML life cycle is known as MLOps, with end-to-end platforms emerging to automate and standardize repeatable manual processes. Although dozens of MLOps solutions exist, adopting them can be confusing and cumbersome. What should you consider when employing MLOps on DGX systems? How can you build a robust MLOps practice? Join us as we dive into this emerging, exciting, and critically important space.

If you are a DGX customer and not receiving the invitations, email dgx_info@nvidia.com.