How can I solve slow DeepStream inference?

Please provide complete information as applicable to your setup.

**• Hardware Platform:** Jetson Xavier NX
**• DeepStream Version:** 6.0.1
**• JetPack Version (valid for Jetson only):** 4.6
Hello, when I deploy a YOLOv4 model trained with the TAO toolkit on custom data and run inference with DeepStream 6.0.1 on a Jetson Xavier NX, the inference speed is slow. The attached file shows the latency of each plugin during inference; the decoding stage has the highest latency, reaching 2400 ms. Can you help me solve this? I look forward to your reply.
problem.txt (5.1 KB)
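For context, per-plugin latency figures like the ones in problem.txt are usually collected by enabling DeepStream's latency measurement before launching the app. Below is a minimal sketch, assuming the reference deepstream-app and a placeholder config path (not the poster's actual file):

```python
# Sketch: re-run the reference deepstream-app with DeepStream's latency
# measurement enabled. The two NVDS_* environment variables turn on
# per-buffer and per-plugin (component) latency printouts.
import os
import subprocess

env = dict(os.environ)
env["NVDS_ENABLE_LATENCY_MEASUREMENT"] = "1"            # per-buffer latency
env["NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT"] = "1"  # per-plugin latency

subprocess.run(
    ["deepstream-app", "-c", "/path/to/deepstream_app_config.txt"],  # placeholder path
    env=env,
    check=True,
)
```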

What is your media pipeline, video type, and configuration file? Which DeepStream app are you testing?
Could you provide the whole logs? Please also provide simple code to reproduce the issue.

Thanks for your reply. Is there a possibility that the model inference affects the decoding speed?

They both use GPU resources, but there is no evidence that inference will limit decoding.
To narrow down this issue, you can test decoding only, as in the sketch below. What is the video resolution? How many input streams are there?
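A decode-only check needs no inference config at all. Here is a minimal sketch, assuming an H.264 elementary stream at a placeholder path, that runs only the Jetson hardware decoder (nvv4l2decoder) and drops the frames, so decode throughput can be timed in isolation:

```python
#!/usr/bin/env python3
# Decode-only benchmark sketch: file -> h264parse -> NVDEC -> fakesink.
import sys
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Hardware decode only, no nvinfer/nvstreammux in the pipeline.
pipeline = Gst.parse_launch(
    "filesrc location=/path/to/input.h264 ! h264parse ! "  # placeholder path
    "nvv4l2decoder ! fakesink sync=false"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
start = time.time()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
elapsed = time.time() - start
pipeline.set_state(Gst.State.NULL)

if msg and msg.type == Gst.MessageType.ERROR:
    err, dbg = msg.parse_error()
    print(f"Pipeline error: {err}", file=sys.stderr)
else:
    print(f"Decoded the whole file in {elapsed:.2f} s")
```

If this decode-only pipeline is already slow, the bottleneck is on the decode side (resolution, number of streams, decoder clocks); if it is fast, the 2400 ms decode latency is more likely back-pressure from downstream elements in the full pipeline.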

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.