Using a GStreamer tee element in an inference pipeline

Hello everybody,

I have a simple inference pipeline for object detection on a Jetson Nano. The detection fps is low, which is not a problem by itself. However, I need to display the video at its original speed (with the results as an overlay). I thought about putting a tee element after the source element so I get two branches: one for processing and the other for visualization.

However, the visualization branch still runs at the fps of the processing branch. I did put a queue on each branch so as to have separate threads (as explained in the GStreamer documentation), but it behaves as if the inference latency were affecting the displayed video.

Ideas?

Thanks!

Hi zowllabs,

Please try the following pipeline:

gst-launch-1.0 \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! queue ! tee name=t \
t. ! queue leaky=2 max-size-buffers=1 ! videorate max-rate=5 ! fakesink \
t. ! queue ! nvoverlaysink sync=false

The fakesink branch limits the framerate to 5 fps, but the display should still run at 30 fps thanks to the queues and the tee.
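If it helps as a starting point, the fakesink branch could later be replaced by your inference chain along these lines. Take it as an untested sketch: infer_config.txt is just a placeholder for your nvinfer configuration, and the leaky queue simply drops the frames that the inference branch cannot keep up with while the display branch keeps running at 30 fps:

gst-launch-1.0 \
nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! queue ! tee name=t \
t. ! queue leaky=2 max-size-buffers=1 ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! nvinfer config-file-path=infer_config.txt ! fakesink \
t. ! queue ! nvoverlaysink sync=false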

Hi!

Thank you very much! Based on your answer I now get different processing and visualization fps on the two branches, and I can see the original video playing faster than the processing fps.

Now I am trying to overlay the two results (I need real-time video with an overlay that is generated at a lower fps). I am using the pipeline below, with a videomixer in charge of the overlay. The result is a low-fps video: if I plug both branches into the videomixer, everything is slowed down (the videomixer is supposed to generate output at the fps of the fastest stream).

gst-launch-1.0 \
uridecodebin uri=file:///home/tomas/Dev/Data/sample_1080p_h264.mp4 ! tee name=t \
t. ! queue leaky=2 max-size-buffers=1 ! m.sink_0 nvstreammux name=m width=1080 height=720 batch-size=1 ! nvinfer config-file-path=infer_config.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! videomixer name=mix sink_0::alpha=0.8 sink_1::alpha=0.8 ! nvvideoconvert ! nvegltransform ! nveglglessink sync=false \
t. ! queue leaky=2 max-size-buffers=1 ! nvvideoconvert ! mix.

Maybe there is a different way of overlaying?

Thanks!

Hi

The problem is that videomixer will only output a buffer once every one of its sink pads has a buffer ready. You can solve this by using videorate to artificially increase the frame rate of the slower branch, with something like this:

gst-launch-1.0 videotestsrc is-live=true ! 'video/x-raw,framerate=5/1' ! videorate ! 'video/x-raw,framerate=30/1' ! perf ! fakesink

In that pipeline, even though videotestsrc negotiates 5 fps, the framerate measured right before the fakesink is 30 fps. videorate achieves this by duplicating input buffers.
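Applied to your mixer pipeline, the idea would be to insert videorate plus a caps filter at the end of the inference branch, right before it reaches the videomixer. The fragment below is an untested sketch of that branch (the 30/1 target rate is just an example; match it to your display branch):

t. ! queue leaky=2 max-size-buffers=1 ! m.sink_0 nvstreammux name=m width=1080 height=720 batch-size=1 ! nvinfer config-file-path=infer_config.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! videorate ! 'video/x-raw,framerate=30/1' ! videomixer name=mix sink_0::alpha=0.8 sink_1::alpha=0.8 ! nvvideoconvert ! nvegltransform ! nveglglessink sync=false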

Note: I’m using one of our elements (perf) to measure the framerate. You can follow the instructions on this repo to install it if you are interested:

https://github.com/RidgeRun/gst-perf

Just replace the configure command with this one to install it in the correct lib folder on the Jetson Nano:

./configure --prefix /usr/ --libdir /usr/lib/aarch64-linux-gnu/
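For completeness, on the Nano the full build would then look something like this (assuming the repo follows the usual autotools steps from its README):

./autogen.sh
./configure --prefix /usr/ --libdir /usr/lib/aarch64-linux-gnu/
make
sudo make install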

OK, I am getting closer, but still having issues.
I simplified my pipeline to test the videorate conversion first:

gst-launch-1.0 uridecodebin uri=file:///home/tomas/Dev/Data/sample_1080p_h264.mp4 ! m.sink_0 nvstreammux name=m width=1080 height=720 batch-size=1 ! videorate ! 'video/x-raw(memory:NVMM),framerate=2/1' ! nvinfer config-file-path=infer_config.txt ! nvvideoconvert ! nvdsosd display-clock=true ! nvegltransform ! nveglglessink sync=false

I am getting strange behavior. The output is the video at 2 fps with the overlay boxes, but the boxes are delayed in time! How is this possible? I would expect nvdsosd to draw on the same buffer that nvinfer pushed, so why are the bounding boxes and the frame that nvdsosd draws on shifted in time?

Thanks!

Hi,
We would suggest you try deepstream-test3 and follow this post to customize it for Jetson Nano.
https://devtalk.nvidia.com/default/topic/1058597/deepstream-sdk/-nano-deepstream-test3-app-not-working-as-expected-for-multiple-video-source/post/5368352/#5368352
By default the config is for Xavier/desktop GPUs. Your use case looks close to this sample, so please take a look.

Great, I didn’t know about the existence of nvoverlaysink.
So I am testing the following pipeline:

gst-launch-1.0 \
uridecodebin uri=file:///home/tomas/Dev/Data/sample_1080p_h264.mp4 ! queue leaky=2 max-size-buffers=1 ! m.sink_0 \
nvstreammux name=m width=1080 height=720 batch-size=1 ! nvinfer config-file-path=adas_infer_config.txt ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0 \
nvcompositor name=comp sink_0::alpha=0.5 sink_1::alpha=0.5 ! nvoverlaysink sync=false \
uridecodebin uri=file:///home/tomas/Dev/Data/v6.mp4 ! queue ! m2.sink_0 nvstreammux name=m2 width=1080 height=720 batch-size=1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1

Where I want to use nvcompositor to overlay two transparent videos. One of the paths of the pipeline includes nvinfer. And the other just another video feed.

The problem is that I am not getting smooth video. At each frame where there is an inference, the video stalls and then continues. I want a smooth video with an overlay on top.

Note that I added queues to create separate threads and set sync=false on the nvoverlaysink, but still no luck.

Thanks!!

Hi,
Please run deepstream-test3 as suggested in #6
You should use nvmultistreamtiler instead of nvcompositor; nvmultistreamtiler is implemented for DeepStream SDK use cases.
Also, for multiple sources you need to configure the interval accordingly. In the reference config file source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt, interval=4 is set.
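If you stay with a gst-launch prototype, the interval can also be set directly on nvinfer as an element property, for example (the config file name below is just the one from your pipeline):

... ! nvinfer config-file-path=adas_infer_config.txt interval=4 ! ...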

Hi,

Is nvmultistreamtiler capable of overlaying two images with transparency?