Webinar Recording: Optimization Strategies for Deploying Self-Driving DNNs with NVIDIA TensorRT

Date: Wednesday, February 3, 2021, 9:00 AM PST | 6:00 PM PST
Webinar Link: https://developer.nvidia.com/drive/training

The tradeoffs between accuracy, complexity, and resource consumption can be significant when deploying deep neural networks (DNNs) on embedded platforms. These tradeoffs depend both on the model architecture and on the resource configuration of the platform on which the inference pipeline runs.

This webinar focuses on how popular model architectures fit the tasks solved on NVIDIA DRIVE AGX, as well as practical considerations for balancing task accuracy against compute and memory resources when deploying with TensorRT. The focus lies on the most generic and typically most expensive component of DNNs for computer vision: the model backbone, for example the popular ResNet-50 architecture.
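
For context, this accuracy-versus-resource tradeoff is typically exercised at engine-build time. The following is a minimal sketch, not taken from the webinar, of building a TensorRT engine from an ONNX export of a backbone such as ResNet-50, with FP16 optionally enabled to reduce latency and memory cost; the file name resnet50.onnx and the helper build_engine are illustrative assumptions.

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    def build_engine(onnx_path, fp16=True, workspace_gb=1):
        # Illustrative helper (assumed, not from the webinar): parse an ONNX
        # backbone export and build a TensorRT engine from it.
        builder = trt.Builder(TRT_LOGGER)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, TRT_LOGGER)

        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError("Failed to parse ONNX model")

        config = builder.create_builder_config()
        config.max_workspace_size = workspace_gb << 30  # scratch memory budget
        if fp16 and builder.platform_has_fast_fp16:
            # Reduced precision trades a small amount of accuracy for lower
            # latency and memory use on embedded GPUs.
            config.set_flag(trt.BuilderFlag.FP16)

        return builder.build_engine(network, config)

    engine = build_engine("resnet50.onnx")  # hypothetical ONNX export of the backbone

Precision flags such as FP16 (and INT8 with a calibrator) are the usual build-time knobs for trading accuracy against compute and memory consumption.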

Note: This webinar is not targeted at beginners. As a prerequisite, we recommend watching our previous webinar session:
Webinar Recording: CUDA/TensorRT on Drive AGX