Does DeepStream for Tesla support TensorRT Inference Server (TRTIS) for running multiple video analytics models?

Hi all, does TensorRT Inference Server (TRTIS) support running video analytics inference with DeepStream on Tesla? I have read that DeepStream takes TensorRT-optimized inference engines as input for running inference, and I need to know whether DeepStream also works with TRTIS; if so, where can I find more information about it?

DeepStream has its own TensorRT GStreamer plugin (Gst-nvinfer), which can set the GPU ID and run multiple models. You can easily create your pipeline including camera capture, decoding, inference, tracker, OSD, display, and so on.
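As an illustration (not one of the shipped samples), here is a minimal C sketch of creating a Gst-nvinfer instance and pointing it at a model configuration file. The file name is a placeholder; per-model settings such as gpu-id, the engine file, batch size, and the unique GIE id live in that file. Running a second model means adding another nvinfer instance with its own config file.

```
/* Minimal sketch: create a Gst-nvinfer instance and point it at a model
 * configuration file. "my_detector_config.txt" is a placeholder; per-model
 * settings such as gpu-id, model-engine-file, batch-size and gie-unique-id
 * are given in that file under its [property] group. */
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *infer = gst_element_factory_make ("nvinfer", "primary-infer");
  if (!infer) {
    g_printerr ("Gst-nvinfer not found; is DeepStream installed?\n");
    return -1;
  }

  /* The config file controls which model runs and on which GPU. */
  g_object_set (G_OBJECT (infer), "config-file-path", "my_detector_config.txt", NULL);

  g_print ("Created %s\n", GST_ELEMENT_NAME (infer));
  gst_object_unref (infer);
  return 0;
}
```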

https://devblogs.nvidia.com/nvidia-serves-deep-learning-inference/ TRTIS is another package built on TensorRT; it supports Python clients, Kubernetes for load balancing, a client/server architecture, and so on.
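To make the client/server point concrete, here is a minimal sketch of a client checking the server's status over HTTP with libcurl. It assumes TRTIS is running locally on its default HTTP port (8000) and that the status endpoint is /api/status as in the TRTIS 1.x REST API; check the server documentation for the exact paths in your version.

```
/* Minimal sketch of TRTIS's client/server model: an HTTP GET against the
 * server's status endpoint using libcurl. Host, port, and endpoint path are
 * assumptions (local server, default HTTP port 8000, TRTIS 1.x REST API). */
#include <stdio.h>
#include <curl/curl.h>

int main (void)
{
  curl_global_init (CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init ();
  if (!curl) {
    fprintf (stderr, "failed to init libcurl\n");
    return 1;
  }

  /* Server status: lists the loaded models and their readiness.
   * The response body is printed to stdout by libcurl's default handler. */
  curl_easy_setopt (curl, CURLOPT_URL, "http://localhost:8000/api/status");
  CURLcode res = curl_easy_perform (curl);
  if (res != CURLE_OK)
    fprintf (stderr, "request failed: %s\n", curl_easy_strerror (res));

  curl_easy_cleanup (curl);
  curl_global_cleanup ();
  return 0;
}
```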

They are independent of each other.

Hi ChrisDing, thanks for your reply.

Could you please recommend sample scripts that show how to use DeepStream and the TensorRT GStreamer plugin to run multiple models?

You can download the DeepStream 3.0 package and refer to sources/apps/sample_apps/deepstream-test2.
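For orientation, here is a rough C sketch of the multi-model chain that sample builds: a primary detector, a tracker, and a secondary classifier chained as GStreamer elements. Element names (nvinfer, nvtracker, nvosd) follow the DeepStream 3.0 plugins and may differ in other releases; the config file paths are placeholders, and the source, decoder, and sink are omitted, so treat this as an outline of the sample rather than a replacement for it.

```
/* Outline of a deepstream-test2 style multi-model chain. Element names
 * follow DeepStream 3.0; config file paths are placeholders. The video
 * source, decoder, and sink are left out for brevity. */
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("multi-model-pipeline");
  GstElement *pgie    = gst_element_factory_make ("nvinfer",   "primary-infer");
  GstElement *tracker = gst_element_factory_make ("nvtracker", "tracker");
  GstElement *sgie    = gst_element_factory_make ("nvinfer",   "secondary-infer");
  GstElement *osd     = gst_element_factory_make ("nvosd",     "onscreen-display");

  if (!pipeline || !pgie || !tracker || !sgie || !osd) {
    g_printerr ("One or more DeepStream elements could not be created\n");
    return -1;
  }

  /* Each nvinfer instance runs its own model, described by its config file. */
  g_object_set (G_OBJECT (pgie), "config-file-path", "dstest2_pgie_config.txt", NULL);
  g_object_set (G_OBJECT (sgie), "config-file-path", "dstest2_sgie1_config.txt", NULL);

  gst_bin_add_many (GST_BIN (pipeline), pgie, tracker, sgie, osd, NULL);

  /* Detector -> tracker -> classifier -> on-screen display.
   * A real app would link a decoded video source upstream of pgie and a
   * display or file sink downstream of osd, then set the pipeline to
   * GST_STATE_PLAYING. */
  if (!gst_element_link_many (pgie, tracker, sgie, osd, NULL)) {
    g_printerr ("Elements could not be linked\n");
    return -1;
  }

  gst_object_unref (pipeline);
  return 0;
}
```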