Does DeepStream support TensorRT Inference Server (TRTIS) to run multiple models for video analytics

Hi all, does TensorRT Inference Server (TRTIS) support running video analytics inference with DeepStream? I have read that DeepStream takes TensorRT-optimized inference engines as input to run inference, and I need to know whether DeepStream also works with TRTIS; if so, where can I find more information about it?

Hi,
No, TensorRT Inference Server is an application built on top of TensorRT,
so it is independent of the DeepStream SDK.

Hi DaneLLL, thanks for your reply.

Can the DeepStream SDK run multiple models (and/or multiple instances of the same model) on multiple GPUs to increase throughput, as TRTIS does? Does it support distributed single-node/multi-accelerator as well as multi-node/multi-accelerator configurations for inference, so as to maximize utilization of the GPUs available in the system?

Hi,
We have [Application Architecture] in the documentation. Please review it, compare it with your pipeline, and let us know where your pipeline deviates from our default one. We need that information for further investigation.
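
For reference, here is a minimal sketch of how deepstream-app can chain multiple models in one pipeline. The section names and properties (`gpu-id`, `gie-unique-id`, `operate-on-gie-id`) are from the standard deepstream-app configuration format; the config filenames and GPU assignments below are hypothetical and assume a two-GPU system:

```
# Hypothetical deepstream-app config excerpt: two models in one pipeline.
# Filenames and gpu-id values are illustrative only (two-GPU system assumed).

[primary-gie]
enable=1
gpu-id=0                    # primary detector pinned to GPU 0
gie-unique-id=1
batch-size=4
config-file=config_infer_primary.txt

[secondary-gie0]
enable=1
gpu-id=1                    # secondary classifier pinned to GPU 1
gie-unique-id=2
operate-on-gie-id=1         # operate on objects produced by the primary GIE
config-file=config_infer_secondary.txt
```

Note this is a single-pipeline sketch; whether one pipeline can usefully span GPUs depends on your DeepStream version and models. For multi-GPU throughput, a common approach is instead to run one deepstream-app instance per GPU with `gpu-id` set accordingly; multi-node orchestration is not handled by the SDK itself.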