Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU (GTX 1070)
• DeepStream Version: 5.0.1-20.09-triton
• TensorRT Version: 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03
• Issue Type (questions, new requirements, bugs): questions
During inference with deepstream-app I only reach 60 FPS with my TensorRT-optimized model (MobileNetV2, 300x300), and the GPU utilization is only 30%.
In my config file, under the [sink0] section, I changed sync=1 to sync=0 to get the full computing power. The FPS jumped from 30 to 60, but with the nvidia-smi tool I discovered that the GPU utilization is still only about 30%.
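In case the measurement method matters, this is roughly how I sample the utilization while the pipeline runs (both are standard nvidia-smi invocations):

```
# print per-second GPU utilization (sm %) while deepstream-app is running
nvidia-smi dmon -s u

# alternative: query just the overall GPU utilization every second
nvidia-smi --query-gpu=utilization.gpu --format=csv -l 1
```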
When I additionally change the type option under [sink0] from 2 (EglSink) to 1 (FakeSink), I get nearly 300 FPS for the model and a GPU utilization of 95-100%.
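For reference, the relevant part of my [sink0] section currently looks roughly like this; the exact values are in the attached config, this is just a sketch of the two options I toggled:

```
[sink0]
enable=1
# type: 1=FakeSink (~300 FPS, 95-100% GPU), 2=EglSink (~60 FPS, ~30% GPU)
type=2
# sync: 0=render as fast as possible, 1=sync to stream clock
sync=0
source-id=0
gpu-id=0
```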
I tried the tips from the official troubleshooting guide (https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_troubleshooting.html), but no luck.
Can someone help me figure out what is causing the low GPU utilization?
source_1080p_dec_infer_mobilenetv2_tf.txt (4.3 KB)
config_infer_primary_ssd.txt (3.3 KB)
Thanks