How to limit the GPU usage of a DeepStream app?

We use DeepStream to run live multi-camera inference with one PGIE and one SGIE.

In another post, I learned that TensorRT will use as much of the GPU as it can, even to the extent of 99-100% utilization. However, to guarantee the performance stability and safety of our product, we need to keep the GPU usage around a reasonable limit such as 85%. What is the recommended way to achieve that?

Since TensorRT is closed-source, I tried to change the camera input frame rate with the camera-fps-n and camera-fps-d parameters, but the app reported “incorrect camera parameters provided”. The camera’s output frame rate is fixed by its firmware.
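For reference, the [source0] group we tried looked roughly like this (the resolution values here are illustrative, not our exact settings):

[source0]
# type=5 selects a CSI camera source in deepstream-app
type=5
camera-width=1920
camera-height=1080
camera-fps-n=10
camera-fps-d=1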

My setup is the following:

Jetson Xavier
DeepStream 5.0
JetPack 4.4
TensorRT 7.1.3
CUDA 10.2

Hi,

Please note that we have released a newer package.
It’s recommended to upgrade your device to DeepStream 6.0.1 with JetPack 4.6.1 first.

May I know what fps your pipeline can reach?
In general, DeepStream keeps the pipeline at 30 fps, so you won’t use all of the GPU resources.
Do you also see this in your environment?
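If the pipeline is not being throttled, one thing worth checking (this is an assumption about your configuration, not something you posted) is the sync property of the sink group, which controls whether rendering is paced to the stream rate:

[sink0]
...
# sync=1 paces rendering at the stream's native rate; sync=0 renders as fast as possible
sync=1
...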

Thanks.

Currently the pipeline reaches 6-7 FPS, and we’d like to increase it to over 10 FPS.

Hi,

You will need to decrease the fps to get a lower GPU workload.
You can change the pipeline fps by adjusting the camera-fps-n and camera-fps-d properties in the source component.

For example, in source1_csi_dec_infer_resnet_int8.txt:

...
[source0]
...
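# effective frame rate = camera-fps-n / camera-fps-d = 10 / 1 = 10 fps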
camera-fps-n=10
camera-fps-d=1
...

Given your question, you can also change the interval value to reduce the inference workload.
This property decides how often an inference task is launched: it sets the number of consecutive batches skipped between inference calls.
Setting it to a larger value will help to reduce the GPU workload.
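For example, in the same configuration file (the value 2 is just an illustration; tune it for your workload):

[primary-gie]
...
# Skip inference on 2 consecutive batches, i.e. run inference on every 3rd batch.
# Larger values reduce the GPU load at the cost of less frequent detections.
interval=2
...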

Thanks.
