GTC 2020: Inter-GPU Communication with NCCL

Session CWE21698
Presenters: Sylvain Jeaugey, NVIDIA; Sreeram Potluri, NVIDIA; Ke Wen, NVIDIA; Anton Korzh, NVIDIA; Nathan Luehr, NVIDIA
Abstract
NCCL (the NVIDIA Collective Communications Library) optimizes inter-GPU communication over PCIe, NVIDIA NVLink, and InfiniBand, powering large-scale training in most deep learning frameworks, including TensorFlow, PyTorch, MXNet, and Chainer. Come discuss NCCL's performance, features, and latest advances.

Connect directly with NVIDIA Experts to get answers to all of your questions on GPU programming and code optimization, share your experience, and get guidance on how to achieve maximum performance on NVIDIA’s platform.
