Pipeline is slower and drops frames if the first nvinfer plugin has a non-zero interval value

I have 2 nvinfer plugins in a pipeline like below.

v4l2src ! videoconvert ! nvvideoconvert ! nvinfer0 ! nvinfer1 ! nvvideoconvert ! nvegltransform ! nveglglessink

nvinfer0 has interval=0 and nvinfer1 has interval=2, and everything is fine: no frames are dropped. When I move nvinfer1 before nvinfer0, like

v4l2src ! videoconvert ! nvvideoconvert ! nvinfer1 ! nvinfer0 ! nvvideoconvert ! nvegltransform ! nveglglessink

the pipeline drops frames and also becomes slower. The frame drops also occur if I change the interval of nvinfer0 in the first pipeline.
To summarize: frames are dropped and the pipeline becomes slower whenever there are two inference plugins and the interval of the first one is non-zero.

I have read about the interval property in the nvinfer plugin documentation, and it mentions that only inference is skipped for those frames and the GstBuffer is not affected. What explains this outcome?

• Hardware Platform (Jetson / GPU): Jetson Nano 4GB Dev Kit
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1 rev1
• TensorRT Version: 7.1.3

Do you have the same observation with any of the samples?


C/C++ Sample Apps Source Details — DeepStream 6.0 Release documentation (nvidia.com)

Yes, I have seen it, but I didn't find anything similar.

Is it fine with Sample test application 2?

Sample test application 2 has no interval property in its classifiers, or it is set to 0 by default, so it would work fine. My problem occurs only when interval != 0. But anyway, I'll run it and check by altering the interval.

I found the mistake in my implementation. I had used the same gie-unique-id for both inference plugins, which I believe caused a conflict. I also changed the process-mode property to match my implementation, and it worked.
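For anyone hitting the same issue, a minimal sketch of the relevant nvinfer config entries is below. The file names and interval values are illustrative (not from the original post); the keys themselves (gie-unique-id, process-mode, operate-on-gie-id, interval) are standard nvinfer properties:

```ini
# primary_config.txt — primary detector (hypothetical file name)
[property]
gie-unique-id=1        # must be unique per nvinfer instance
process-mode=1         # 1 = primary, operates on the full frame
interval=0             # run inference on every frame

# secondary_config.txt — secondary classifier (hypothetical file name)
[property]
gie-unique-id=2        # distinct from the primary's id
process-mode=2         # 2 = secondary, operates on objects from an upstream gie
operate-on-gie-id=1    # consume detections from gie-unique-id=1
interval=2             # skip inference on 2 out of every 3 batches
```

Giving each nvinfer a distinct gie-unique-id lets downstream elements attribute metadata to the correct inference stage, and setting process-mode per element avoids both plugins behaving as primaries.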

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.