I am running DeepStream with Triton Inference Server in Docker. How can I control whether nvinfer or nvinferserver is used for inference when running DeepStream? For example, is the objectDetector_SSD sample from the source tree using TensorRT directly, or is it using Triton Inference Server?
• Hardware: NVIDIA T4
• DeepStream Version: 5.0
• Triton Inference Server on Docker
Triton (Inference Server) offers broader usability, supporting multiple model types such as PyTorch, TensorFlow, TensorRT, etc.
nvdsinfer (which embeds TensorRT directly), on the other hand, focuses more on optimizing memory usage and runtime speed.
So which DeepStream plugin you use for inference depends on your needs and your purpose; see the sketch below.
In my opinion, Triton is recommended during the middle stages of development (demos, prototypes, etc.), while nvdsinfer may be better for final product deployment.
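For a concrete picture, here is a minimal sketch (not from the original thread, assuming the DeepStream Python bindings are installed) of how the two plugins are swapped in a pipeline you build yourself: only the element name ("nvinfer" vs. "nvinferserver") and its config file change. The config file paths are placeholders.

```python
#!/usr/bin/env python3
# Sketch: choosing between nvinfer (TensorRT in-process) and nvinferserver (Triton)
# as the primary inference element of a DeepStream pipeline.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

USE_TRITON = False  # flip to True to send inference through Triton (nvinferserver)

Gst.init(None)

if USE_TRITON:
    # nvinferserver forwards inference requests to Triton Inference Server
    pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
    config_path = "config_infer_triton_ssd.txt"      # placeholder path
else:
    # nvinfer runs TensorRT engines directly inside the GStreamer process
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    config_path = "config_infer_primary_ssd.txt"     # placeholder path

if pgie is None:
    sys.exit("Failed to create inference element; check that the plugin is installed")
pgie.set_property("config-file-path", config_path)

# ... the rest of the pipeline (source, streammux, OSD, sink) is unchanged,
# so switching between TensorRT and Triton is just a matter of which plugin
# and which config file you pick here.
```

If you are using deepstream-app rather than building the pipeline yourself, the same choice is, to the best of my knowledge, exposed through the `plugin-type` setting in the `[primary-gie]` group of the app config file.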
Hi, I'm a newbie here and would like to know how you run an AI inference benchmark for the T4 on your system. I currently have a T4 installed with an Intel processor (Ubuntu 20.04 LTS), and I can run MLPerf with the container provided in the documentation, but how can I further run AI inference (trtexec) on the system? Is that also done through a container, and with DeepStream?