GTC 2020 CWE21825
Presenters: Craig Wittenbrink, NVIDIA; Pravnav Marathe, NVIDIA; Rajeev Rao, NVIDIA; Kevin Chen, NVIDIA; Dilip Sequeira, NVIDIA
The TensorRT inference library is most easily used by importing trained models through ONNX. In this session, we cover the fundamentals of the workflow for importing deep learning models into TensorRT and putting them into production using TensorRT's ONNX parser. We'll discuss end-to-end solutions spanning training, export, import into TensorRT, and deployment with TensorRT Inference Server.
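The import step described above can be sketched with the TensorRT Python API. This is a minimal, hedged example, not material from the session: the model path `model.onnx` is a placeholder, and the exact builder calls vary by TensorRT version (this sketch follows the TensorRT 7-era API current at GTC 2020).

```python
# Minimal sketch: parse an ONNX file and build a TensorRT engine.
# Assumes the tensorrt package is installed; "model.onnx" is hypothetical.

def build_engine_from_onnx(onnx_path, workspace_gb=1):
    """Parse an ONNX model and build a TensorRT engine (TRT 7-era API)."""
    import tensorrt as trt  # deferred import: only needed at build time

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # ONNX models require an explicit-batch network definition.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Surface parser diagnostics before giving up.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = workspace_gb << 30  # bytes
    return builder.build_engine(network, config)


if __name__ == "__main__":
    engine = build_engine_from_onnx("model.onnx")  # hypothetical path
```

For quick experiments without writing code, the bundled `trtexec` tool performs the same import, e.g. `trtexec --onnx=model.onnx --saveEngine=model.plan`.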