• Hardware Platform (Jetson / GPU)
x86-64 Ubuntu 20.04 LTS machine with a GeForce RTX 3060
• DeepStream Version
6.1
• TensorRT Version
8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only)
515.48.07
• Issue Type (questions, new requirements, bugs)
I ran 50 local video files and found that GPU usage is very high and the frame rate is very low. I tried dropping frames and changing the streammux width and height from 1920x1080 to 640x640, but GPU usage did not change. I also switched the model from YOLOv5s to YOLOv5n, which reduced the model parameter count from 17.8 to 5.7, but GPU usage only dropped slightly. Why? How can I reduce the GPU usage and improve the frame rate?
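For reference, the config groups involved look roughly like this (a minimal sketch in the standard deepstream-app config format; the file names, source count, and exact values are placeholders, not an exact copy of my config):

```
[source0]
enable=1
type=3                      # 3 = multi-URI file source
uri=file:///path/to/video.mp4
num-sources=50
drop-frame-interval=2       # decoder outputs every 2nd frame
gpu-id=0

[streammux]
gpu-id=0
batch-size=50
width=640                   # changed from 1920
height=640                  # changed from 1080
batched-push-timeout=40000

[primary-gie]
enable=1
gpu-id=0
batch-size=50
interval=0                  # batches to skip between inference runs
config-file=config_infer_primary_yoloV5.txt
```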
What’s the pipeline you are using?
deepstream-app
How can I measure the GPU usage of each individual module, such as the inference group and the decode group? That would help find the cause of the high GPU usage and low frame rate.
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
Can you use trtexec to benchmark your model and check how many FPS it can reach on this GPU?
trtexec - Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
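A typical invocation looks like this (the ONNX file name, the input tensor name `images`, and the 640x640 shape are assumptions based on the usual YOLOv5 export; adjust them to your model):

```
# Build and time an FP16 engine from the ONNX export
trtexec --onnx=yolov5s.onnx --fp16 --shapes=images:1x3x640x640 --saveEngine=yolov5s_fp16.engine

# Or benchmark an engine that was already serialized (e.g. by DeepStream)
trtexec --loadEngine=yolov5s_fp16.engine
```

The Throughput value in the summary is roughly the FPS the GPU can sustain for that batch size, independent of decoding and the rest of the pipeline.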
You can run decoding only and check the GPU usage, then run inference only and check it again.
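Running nvinfer on its own is not straightforward with gst-launch, so a practical approximation is to compare a decode-only run against the same run with nvinfer added, while watching nvidia-smi in another terminal (a sketch; the file path and infer config name are placeholders):

```
# Decode only: decoder load shows up in the "dec" column of nvidia-smi dmon
gst-launch-1.0 uridecodebin uri=file:///path/to/video.mp4 ! fakesink

# Decode + inference: compare the "sm" column against the decode-only run
gst-launch-1.0 uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer_primary_yoloV5.txt ! fakesink

# In another terminal, watch per-engine utilization (sm / mem / enc / dec)
nvidia-smi dmon -s u
```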
By "pipeline" I don't mean the application, but the components and the data flow.
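For reference, the default deepstream-app pipeline built from the sample configs roughly follows this data flow (the exact elements depend on which config groups are enabled):

```
source(s) -> decoder (nvv4l2decoder) -> nvstreammux -> nvinfer (primary-gie)
  -> nvtracker -> nvinfer (secondary-gie, optional) -> nvmultistreamtiler
  -> nvvideoconvert -> nvdsosd -> sink
```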
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.