Best practices for inference – DeepStream Technical Deep-Dive 2

DeepStream uses NVIDIA Triton and TensorRT plug-ins to help developers quickly build fully accelerated vision AI pipelines. In this technical deep-dive, we’ll go over DeepStream’s inference options and how to build inference pipelines, write configurations, and set batching policies. Finally, we’ll explore how DeepStream’s pre-processing and post-processing plug-ins can be used alongside the inference options to support custom models and advanced use cases.
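
As a preview of the configuration topics, here is a minimal sketch of a Gst-nvinfer configuration file for a hypothetical ONNX detector (all file names and the class count are illustrative, not from the session):

    [property]
    gpu-id=0
    # Hypothetical ONNX model; nvinfer builds a TensorRT engine from it on first run
    onnx-file=model.onnx
    # Cached engine reused on later runs if present (name is illustrative)
    model-engine-file=model.onnx_b4_gpu0_fp16.engine
    batch-size=4
    # Precision: 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    # 0=detector, 1=classifier, 2=segmentation
    network-type=0
    num-detected-classes=4
    gie-unique-id=1

Here, batch-size controls how many buffers nvinfer gathers per TensorRT execution; it is typically matched to the nvstreammux batch size upstream.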

Join this session to learn how to:

  1. Work with DeepStream’s inference options for TensorFlow, PyTorch, and ONNX models.

  2. Use TensorRT with DeepStream to run optimized models.

  3. Use Triton Inference Server to support single or multiple DeepStream pipelines.

  4. Use DeepStream’s pre/post-processing plug-ins (see the pipeline sketch after this list).
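
To give a feel for how these pieces fit together, a minimal DeepStream pipeline might look like the following gst-launch-1.0 sketch (the input file and config path are placeholders):

    gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! \
        m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
        nvinfer config-file-path=pgie_config.txt ! \
        nvvideoconvert ! nvdsosd ! nveglglessink

Swapping nvinfer for nvinferserver (with its own configuration file) routes inference through Triton Inference Server instead of calling TensorRT directly; the rest of the pipeline stays the same.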

Date/Time:

  • NA/EMEA session: 8/2, 8am PT. Register now
  • APAC session: 8/3, 10am JST/KST, 11am AEST, 9am CST, 6:30am IST. Register now