GTC 2020: Accelerating Recommender System Training and Inference on NVIDIA GPUs

GTC 2020 CWE21747
Presenters: Even Oldridge, NVIDIA; Alec Gunny & Akshay Subramaniam, NVIDIA; Onur Yilmaz & Chirayu Garg, NVIDIA; Pawel Morkisz & Minseok Lee, NVIDIA; Lukasz Mazurek & Scott LeGrand, NVIDIA; Paulius Micikevicius & Levs Dolgovs, NVIDIA
Abstract
Come and learn how you can use NVIDIA technologies to accelerate your recommender system training and inference pipelines. We’ve been doing some ground-breaking work on optimizing performance across many stages of the recommender system pipeline, including ETL of tabular data, multi-node training of CTR models with terabyte-scale embeddings, low-latency inference for Wide & Deep models, and more. Running on NVIDIA GPUs, many of these components are more than an order of magnitude faster than conventional CPU implementations. We’d be thrilled to learn from you how these accelerated components may apply to your setup and, if not, what’s missing. We’d also like to hear about the roles recommenders play in your products, the types of systems you’re building, and the challenges you face. This session is ideal for data scientists and engineers who are responsible for developing, deploying, and scaling their recommender pipelines. Please join us for what’s sure to be an interesting series of discussions.
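For readers unfamiliar with the Wide & Deep architecture mentioned in the abstract, the sketch below shows a minimal CTR model of that kind using the Keras functional API. The feature names, vocabulary sizes, and layer widths are illustrative assumptions, not values from the session itself.

```python
import tensorflow as tf

NUM_WIDE_FEATURES = 10            # assumed number of sparse/cross "wide" inputs
CATEGORICAL_VOCAB_SIZE = 100_000  # assumed vocabulary size for one categorical field
EMBEDDING_DIM = 64                # assumed embedding width

# Wide part: linear features fed straight toward the output layer.
wide_in = tf.keras.Input(shape=(NUM_WIDE_FEATURES,), name="wide_features")

# Deep part: a categorical ID passed through an embedding, then an MLP.
cat_in = tf.keras.Input(shape=(1,), dtype=tf.int32, name="item_id")
emb = tf.keras.layers.Embedding(CATEGORICAL_VOCAB_SIZE, EMBEDDING_DIM)(cat_in)
deep = tf.keras.layers.Flatten()(emb)
deep = tf.keras.layers.Dense(256, activation="relu")(deep)
deep = tf.keras.layers.Dense(128, activation="relu")(deep)

# Combine the wide and deep branches and predict click probability.
combined = tf.keras.layers.Concatenate()([wide_in, deep])
output = tf.keras.layers.Dense(1, activation="sigmoid", name="ctr")(combined)

model = tf.keras.Model(inputs=[wide_in, cat_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```

In production-scale CTR workloads, the embedding tables for such models can grow to terabytes, which is what motivates the multi-node, GPU-accelerated training discussed in the session.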

Watch this session
Join in the conversation below.