DeepStream uses NVIDIA Triton and TensorRT plugins to help developers quickly build fully accelerated vision AI pipelines. In this technical deep dive, we’ll go over DeepStream’s inference options, as well as how to build inference pipelines, configurations, and batching policies. Finally, we’ll explore how DeepStream’s pre-processing and post-processing plugins can be used alongside the inference options to support custom models and advanced use cases.
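As a rough sketch of what such a pipeline looks like, the following `gst-launch-1.0` command chains DeepStream’s standard GStreamer elements, with `nvinfer` (the TensorRT-based inference plugin) doing the accelerated inference; the file paths here are placeholders you would replace with your own media and model configuration:

```shell
# Hypothetical single-stream DeepStream pipeline (paths are placeholders):
# decode H.264 -> batch via nvstreammux -> TensorRT inference via nvinfer
# -> convert -> draw bounding boxes with nvdsosd -> discard output
gst-launch-1.0 \
  filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 \
    batched-push-timeout=40000 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! fakesink
```

Swapping `nvinfer` for `nvinferserver` (with a matching Triton config) is how the same pipeline shape is served through Triton instead of directly through TensorRT.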
Join this session to learn how to:
Work with DeepStream’s inference options for TensorFlow, PyTorch, and ONNX models.
Work with TensorRT and DeepStream for optimized models.
Use Triton Inference Server to support single or multiple DeepStream pipelines.
Use DeepStream’s pre-processing and post-processing plugins.
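To give a flavor of the configuration side covered above, here is a minimal sketch of an `nvinfer` configuration file for an ONNX model; the file names and class count are hypothetical placeholders, and only a handful of the available keys are shown:

```
# Hypothetical nvinfer config for an ONNX detector (file names are placeholders)
[property]
gpu-id=0
onnx-file=model.onnx
# nvinfer builds and caches a TensorRT engine from the ONNX file
model-engine-file=model.onnx_b4_gpu0_fp16.engine
batch-size=4
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4

[class-attrs-all]
pre-cluster-threshold=0.4
```

An analogous `nvinferserver` configuration points at a Triton model repository instead of a local ONNX file, which is what lets one Triton instance serve several DeepStream pipelines.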