Asynchronous execution of nvinfer and video stream

Orin NX 8GB, JetPack 6.0 DP, R36.2
I have two USB cameras and need to run object detection on both at the same time, draw bounding boxes when detection completes, composite the two camera frames together, and finally stream the result over RTSP.


Right now the video frame rate is very low and the latency is high. I suspect the two nvinfer instances are overloading the GPU and that nvinfer is blocking the video stream.
Is there a setting that lets nvinfer and the video stream work asynchronously, so that the current frame is automatically passed on to the next plugin even when inference on the previous frame has not finished?
We do not need inference to run in real time. I also think tee might fit my requirements.
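
One common way to approximate that behavior in GStreamer is a leaky queue in front of nvinfer, so stale frames are dropped instead of blocking capture, combined with nvinfer's `interval` property, so inference only runs on every Nth batch. A minimal single-camera sketch of the idea, where the device node, caps, and config file name are placeholders and udpsink stands in for a real RTSP server:

```
# Hypothetical single-camera branch. The leaky queue drops stale frames when
# nvinfer falls behind instead of stalling v4l2src; interval=4 skips inference
# on 4 of every 5 batches. udpsink is a stand-in for an RTSP server.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! 'video/x-raw,width=1280,height=720,framerate=30/1' ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 live-source=1 ! \
  queue leaky=downstream max-size-buffers=2 ! \
  nvinfer config-file-path=model_config.txt interval=4 ! \
  nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
  nvv4l2h264enc ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5400
```

Note that with interval > 0, batches that skip inference carry no detection metadata unless an nvtracker element downstream of nvinfer propagates the boxes.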

Hi,
In DeepStream SDK, we use nvmultistreamtiler instead of nvcompositor. You may try that plugin and see if performance improves. Also, do you use different models in nvinfer engines 1 and 2? The optimal solution is to use nvstreammux to mux the sources and then send them to a single nvinfer engine.
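
For reference, a rough sketch of that suggestion, with both cameras batched by nvstreammux into a single nvinfer and tiled by nvmultistreamtiler. Device nodes, resolutions, and the config file name are placeholders, and udpsink again stands in for an RTSP server:

```
# Hypothetical batched pipeline: one nvinfer processes both cameras in a
# single batch, then nvmultistreamtiler lays the two streams side by side.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1280 height=720 live-source=1 \
    batched-push-timeout=33000 ! \
  nvinfer config-file-path=single_model_config.txt batch-size=2 ! \
  nvmultistreamtiler rows=1 columns=2 width=2560 height=720 ! \
  nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
  nvv4l2h264enc ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5400 \
  v4l2src device=/dev/video0 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_0 \
  v4l2src device=/dev/video1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_1
```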

We use two different models, so we run two nvinfer instances and did not use nvstreammux to multiplex the sources.
nvcompositor may not be the problem: when I removed one of the nvinfer instances, the pipeline worked fine.
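
For completeness, a hedged sketch of the two-model layout described here, with a leaky queue in front of each nvinfer so that a slow model drops frames in its own branch rather than stalling the other. Device nodes, model config files, and the compositor geometry are placeholders:

```
# Hypothetical two-branch pipeline: each camera gets its own nvstreammux
# (batch-size=1) and nvinfer; nvcompositor places the branches side by side.
gst-launch-1.0 \
  nvcompositor name=comp \
    sink_0::xpos=0    sink_0::ypos=0 sink_0::width=1280 sink_0::height=720 \
    sink_1::xpos=1280 sink_1::ypos=0 sink_1::width=1280 sink_1::height=720 ! \
  nvvideoconvert ! nvv4l2h264enc ! h264parse ! rtph264pay ! \
  udpsink host=127.0.0.1 port=5400 \
  v4l2src device=/dev/video0 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! muxa.sink_0 \
  nvstreammux name=muxa batch-size=1 width=1280 height=720 live-source=1 ! \
  queue leaky=downstream max-size-buffers=2 ! \
  nvinfer config-file-path=model_a_config.txt ! \
  nvvideoconvert ! nvdsosd ! nvvideoconvert ! comp.sink_0 \
  v4l2src device=/dev/video1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! muxb.sink_0 \
  nvstreammux name=muxb batch-size=1 width=1280 height=720 live-source=1 ! \
  queue leaky=downstream max-size-buffers=2 ! \
  nvinfer config-file-path=model_b_config.txt ! \
  nvvideoconvert ! nvdsosd ! nvvideoconvert ! comp.sink_1
```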

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.