How should we configure DeepStream to achieve the highest performance?

At present, we run DeepStream 5.1 on the Jetson TX2 platform, but we find that it takes a long time for DeepStream to parse pictures. At first, some pictures take 2 to 4 seconds to complete, and as the test progresses this delay gradually grows, resulting in data delay. We suspect some of our settings are mismatched?

Past experience suggests that questions about NVIDIA’s embedded platforms typically receive more and/or faster answers in the sub-forums dedicated to those platforms. The sub-forum for the Jetson TX2 is here:

OK, thanks for the suggestion. I have moved the topic to the Jetson TX2 forum.

Hi,
Are your sources JPEGs that need MJPEG decoding? We have some samples; please check if your use case is close to this one:

/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-image-decode-test

And see whether you can run it to reproduce the issue.

No, the issue we are facing now is that DeepStream takes too long to parse the RTSP stream and analyze the pictures, resulting in data delay. In other words, DeepStream's performance does not seem to be fully unleashed. Could it be that some configurations of the Jetson TX2 do not match?

As the image shows, the timestamp generated by DeepStream lags behind the actual time by about two to three seconds.

Hi,
Please set sync=0 to the sink for a try.

And please run gst-launch-1.0 command as a comparison.

$ gst-launch-1.0 uridecodebin uri=rtsp://__RTSP_URI__ ! nvoverlaysink sync=0
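If you launch through deepstream-app instead of gst-launch-1.0, the same sync setting goes in the sink group of the app config. This is only a sketch, assuming a typical on-screen sink group in a deepstream-app configuration file (the group name and surrounding keys may differ in your config):

```
[sink0]
enable=1
# type=5 selects the overlay sink on Jetson in the deepstream-app
# reference configs (assumption: adjust to the sink type you use)
type=5
# sync=0 renders frames as soon as they are ready instead of
# pacing them against the pipeline clock
sync=0
```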

Thanks for the response; I will have the team try this approach for the analysis and then update here.


Hello DaneLLL, we tried your suggestion, but the delay is still there, so it does not seem to help. The main problem appears to be that analyzing the pictures, such as counting how many people are present, takes too long.

Hi,
It sounds like the bottleneck is the inferencing in the nvinfer plugin. Do you see the issue in pure video playback:

$ gst-launch-1.0 uridecodebin uri=rtsp://__RTSP_URI__ ! nvoverlaysink sync=0

Hi
Yeah, we tried running your command, and there seems to be no delay in pure video playback. So the bottleneck is the inferencing in the nvinfer plugin. Any suggestions here?

Hi,
Please execute sudo nvpmodel -m 0 and sudo jetson_clocks and check if it improves. If it is still the same, the model may be too heavy and you would need to set interval, or try to make the model lighter.

You may check GPU loading by running sudo tegrastats.
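If maxing out the clocks is not enough, the interval property mentioned above tells nvinfer to skip frames between inference calls and reuse the previous results on the skipped frames. A minimal sketch of where it goes, assuming a typical config_infer_primary.txt (only the relevant key is shown; the rest of your [property] group stays as-is):

```
[property]
# ...existing model and engine settings...

# interval = number of consecutive frames to skip between inferences.
# interval=2 runs the model on every 3rd frame, cutting GPU load
# roughly to a third at the cost of slower-updating detections.
interval=2
```

A good workflow is to watch the GR3D_FREQ (GPU) utilization in sudo tegrastats while raising interval until the pipeline keeps up with the RTSP frame rate.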

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.