DeepStream Triton server and TrafficCamNet

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1.1
• TensorRT Version: latest included in the NGC DeepStream image
• NVIDIA GPU Driver Version (valid for GPU only): 525
• Issue Type (questions, new requirements, bugs): Questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

Hello, I’m building an object detection pipeline in which I want to deploy TrafficCamNet as the main inference engine. I have been able to convert the .etlt file into a .engine file with the nvinfer plugin from DeepStream, and I can then load that .engine file in Triton server. However, I can’t get perf_analyzer to use more than 25% of the GPU, which caps my RTX 3080 at around 600 fps.
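For context, the benchmark runs look roughly like this (the model name, batch size, and endpoint are illustrative, not taken from my exact setup):

```shell
# Benchmark the TrafficCamNet model served by Triton.
# -m : model name as registered in the Triton model repository (assumed name)
# -b : batch size per inference request
# --concurrency-range : sweep the number of concurrent in-flight requests
# -u / -i : Triton endpoint and protocol (gRPC on the default port here)
perf_analyzer -m trafficcamnet -b 8 --concurrency-range 1:8 \
  -u localhost:8001 -i grpc
```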

I would love to use 100% of the GPU so I can serve more frames per second; the project involves a large number of cameras and I want to minimize the hardware required.

I also wanted to do the conversion outside of DeepStream, since trtexec conversions have worked much better for me in the past. However, most of the documentation I can find for TAO models targets TAO 3.22, and everything is completely different in TAO 4.0, so any help with this would be much appreciated!
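For the offline route, note that trtexec cannot consume the encrypted .etlt directly; the standalone tao-converter binary is the usual path to a TensorRT engine. A sketch of that conversion, assuming the published TrafficCamNet encoding key, input dimensions, and DetectNet_v2 output tensor names from the NGC model card (verify the flags against your tao-converter version):

```shell
# Convert the encrypted .etlt model to a TensorRT engine with tao-converter.
# -k : model encoding key (tlt_encode is the published key for TrafficCamNet)
# -d : input dimensions as C,H,W
# -o : comma-separated output tensor names (DetectNet_v2 coverage and bbox heads)
# -m : maximum batch size baked into the engine
# -t : precision mode
# -e : output engine path
tao-converter trafficcamnet.etlt \
  -k tlt_encode \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -m 16 \
  -t fp16 \
  -e trafficcamnet_b16_fp16.engine
```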

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

You can build and run the inference with a higher batch size.
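Besides building the engine with a larger max batch size, Triton's dynamic batcher and multiple model instances per GPU usually help saturate the device. A sketch of the relevant config.pbtxt fragment (the field values are illustrative, not tuned):

```
# Let Triton coalesce individual requests into larger batches,
# and run two copies of the model concurrently on the GPU.
dynamic_batching {
  preferred_batch_size: [ 8, 16 ]
  max_queue_delay_microseconds: 100
}
instance_group [
  {
    count: 2
    kind: KIND_GPU
  }
]
```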

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.