NVIDIA Webinars — Optimizing DNN Inference Using CUDA and TensorRT on NVIDIA DRIVE AGX

  • [b]Optimizing DNN Inference Using CUDA and TensorRT on NVIDIA DRIVE AGX[/b] Date: Tuesday, October 22, 2019 Time: 9:00 AM PDT / 6:00 PM CEST

    Autonomous vehicles need fast, accurate perception to operate safely. This means accelerating massive computational workloads in parallel on high-performance, energy-efficient compute platforms like NVIDIA DRIVE AGX™. The NVIDIA® CUDA® parallel computing platform and programming model is a powerful solution for meeting these demands on DRIVE AGX. Deep neural networks (DNNs) running on CUDA are further optimized by NVIDIA TensorRT™, a software platform for high-performance deep learning inference.

    In this webinar, we’ll introduce CUDA cores, threads, blocks, grids, and streams, as well as the TensorRT workflow. We’ll also cover CUDA memory management and TensorRT optimization, and show how you can deploy optimized deep learning networks using TensorRT samples on NVIDIA DRIVE AGX.
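
    To give a sense of the CUDA concepts the webinar covers, here is a minimal, hedged sketch (not webinar material; all names are illustrative) of a SAXPY kernel that shows how threads, blocks, and the grid relate, launched on a CUDA stream with explicitly managed device memory:

    ```cuda
    // Illustrative sketch only: a SAXPY kernel demonstrating threads,
    // blocks, grid, streams, and explicit device memory management.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        // Each thread handles one element; blockIdx/blockDim/threadIdx
        // locate this thread within the overall grid.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host allocations and initialization.
        float *h_x = (float *)malloc(bytes);
        float *h_y = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

        // Device allocations.
        float *d_x, *d_y;
        cudaMalloc(&d_x, bytes);
        cudaMalloc(&d_y, bytes);

        // A stream orders the copies and the kernel launch asynchronously
        // with respect to the host.
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        cudaMemcpyAsync(d_x, h_x, bytes, cudaMemcpyHostToDevice, stream);
        cudaMemcpyAsync(d_y, h_y, bytes, cudaMemcpyHostToDevice, stream);

        // Choose a block size and derive the grid size to cover n elements.
        int threadsPerBlock = 256;
        int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;
        saxpy<<<blocksPerGrid, threadsPerBlock, 0, stream>>>(n, 2.0f, d_x, d_y);

        cudaMemcpyAsync(h_y, d_y, bytes, cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);  // wait for all work in the stream

        printf("y[0] = %f\n", h_y[0]);  // 2*1 + 2 = 4

        cudaStreamDestroy(stream);
        cudaFree(d_x); cudaFree(d_y);
        free(h_x); free(h_y);
        return 0;
    }
    ```

    Overlapping transfers and kernels across multiple streams is one of the optimization techniques relevant to inference pipelines on DRIVE AGX; the webinar goes into the TensorRT side of that workflow.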