How to Use nvcompositor to Composite Output from batched nvinfer with Video

Hi,

In my case, I would like to composite 2 out of the 4 video streams that go through nvstreammux -> nvinfer, together with a local video file, and encode the result as an MP4 file:
2 4K + 1 1080p -> MP4

In the pipeline below, all I can do is encode all 3 outputs as separate videos. Whenever I use nvcompositor and redirect the streams to its designated sink pads, the pipeline just gets stuck and ends with errors when I press Ctrl+C:

pipeline failed to preroll

gst-launch-1.0 -v -e \
nvstreammux name=m batch-size=4 width=3840 height=2160 ! \
nvinfer config-file-path=$CONFIG_FILE_PATH batch-size=4 unique-id=1 ! \
nvstreamdemux name=demux \
filesrc location=$VIDEO_4K_0 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
filesrc location=$VIDEO_4K_1 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 \
filesrc location=$VIDEO_4K_2 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_2 \
filesrc location=$VIDEO_4K_3 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_3 \
filesrc location=$VIDEO_0 ! qtdemux ! h264parse ! nvv4l2decoder ! tee name=t t. ! queue ! nvegltransform ! fpsdisplaysink video-sink=nveglglessink text-overlay=false \
t. ! queue ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./1080p_1.mp4 \
demux.src_0 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./4K_1.mp4 \
demux.src_1 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./4K_2.mp4 \



nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 \
sink_1::xpos=1920 sink_1::ypos=0 sink_1::width=1920 sink_1::height=1080 \
sink_2::xpos=1920 sink_2::ypos=1080 sink_2::width=1920 sink_2::height=1080 ! nvvideoconvert ! \
nvv4l2h264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4 \

Any ideas are very much appreciated.

Thanks.
Vincent

Hi,
You have to use nvmultistreamtiler. Please refer to a similar topic:

Hi Dane,

I have tried to use nvmultistreamtiler for this purpose, but I still have a problem:

Pipeline:

gst-launch-1.0 -e -v \
nvstreammux name=m2 batch-size=3 width=3840 height=2160 ! nvmultistreamtiler rows=2 columns=2 width=960 height=540 ! nvvideoconvert ! nvegltransform ! fpsdisplaysink video-sink=nveglglessink text-overlay=false sync=false \

nvstreammux name=m batch-size=4 width=3840 height=2160 ! nvinfer config-file-path=$CONFIG_FILE_PATH batch-size=4 unique-id=1 ! nvstreamdemux name=demux \

filesrc location=$VIDEO_4K_0 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \

filesrc location=$VIDEO_4K_1 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 \

filesrc location=$VIDEO_4K_2 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_2 \

filesrc location=$VIDEO_4K_3 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_3 \

filesrc location=$VIDEO_0 ! qtdemux ! h264parse ! nvv4l2decoder ! tee name=t t. ! queue ! nvegltransform ! fpsdisplaysink video-sink=nveglglessink text-overlay=false \

t. ! queue ! m2.sink_0 \

demux.src_0 ! queue ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=(string)NV12, width=(int)3840, height=(int)2160" ! m2.sink_1 \

demux.src_0 ! queue ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=(string)NV12, width=(int)3840, height=(int)2160" ! m2.sink_1 \

Basically, the last two lines are the problematic ones (note that both read demux.src_0 and m2.sink_1; presumably the second was meant to be demux.src_1 and m2.sink_2).

Can you show me how to consume the output of nvstreamdemux and feed it into nvstreammux for video composition?

I am trying to get the following:
2 streams of video that have gone through inference, with their inference metadata visualized by nvdsosd, plus 1 local video, composited into a single video.

Thanks,
Vincent

Hi,
Not quite sure about the use case. You should not need nvstreamdemux; run the pipeline like:

... ! nvstreammux ! nvinfer ! nvmultistreamtiler ! ...
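
As a concrete illustration, that suggestion could look roughly like the sketch below. This is untested here; it assumes a DeepStream-capable device, the same $CONFIG_FILE_PATH and $VIDEO_4K_* variables as in your pipeline, and follows the deepstream-app sample order of tiler before nvdsosd, so the on-screen display draws on the single tiled frame:

```shell
# Sketch (untested): batch all sources, infer, tile into one frame,
# draw the inference metadata with nvdsosd, then encode a single MP4.
gst-launch-1.0 -e \
  nvstreammux name=m batch-size=4 width=3840 height=2160 ! \
  nvinfer config-file-path=$CONFIG_FILE_PATH batch-size=4 unique-id=1 ! \
  nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 ! \
  nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! \
  nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! \
  filesink location=tiled.mp4 \
  filesrc location=$VIDEO_4K_0 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
  filesrc location=$VIDEO_4K_1 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 \
  filesrc location=$VIDEO_4K_2 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_2 \
  filesrc location=$VIDEO_4K_3 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_3
```

Note that with this layout all four batched streams end up in the tiled output; it does not by itself mix in a source that bypasses nvinfer.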

Hi Dane,

I am using nvstreamdemux to extract the image frames as well as the inference metadata after the batched nvinfer.

I am referring to https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.html#wwpID0E0BY0HA

In other words, I need 2 out of the 4 streams that were submitted to nvstreammux -> nvinfer, and I also want the inference metadata to be visualized by nvdsosd on those 2 streams.

Thanks,
Vincent

Hi,
If you would like to show some sources with inference and some without, we would suggest either:

  1. Launch 2 separate pipelines (one with nvinfer and the other without)
  2. Write custom logic to filter out metadata for sources which shouldn't show inference output

nvstreammux and nvstreamdemux cannot be linked back-to-back to construct this use case. You may try either of the two solutions above.
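
For option 1, a rough sketch of the two independent pipelines might look like the following (untested; it assumes a DeepStream-capable device and the same shell variables as the pipelines above, and writes two separate MP4 files rather than one composited video):

```shell
# Option 1 sketch (untested): two independent pipelines run side by side.
# Pipeline A: the two streams that need inference, batched, tiled, and
# drawn with nvdsosd.
gst-launch-1.0 -e \
  nvstreammux name=m batch-size=2 width=3840 height=2160 ! \
  nvinfer config-file-path=$CONFIG_FILE_PATH batch-size=2 unique-id=1 ! \
  nvmultistreamtiler rows=1 columns=2 width=3840 height=1080 ! \
  nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! \
  nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=inferred.mp4 \
  filesrc location=$VIDEO_4K_0 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
  filesrc location=$VIDEO_4K_1 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 &

# Pipeline B: the local video, re-encoded without inference.
gst-launch-1.0 -e \
  filesrc location=$VIDEO_0 ! qtdemux ! h264parse ! nvv4l2decoder ! \
  nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=plain.mp4 &
wait
```

Option 2 (filtering metadata per source) would instead batch all sources through nvinfer and use a pad-probe in an application to drop object metadata for frames whose source_id should not show inference output; that cannot be expressed in gst-launch alone.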

Hi Dane,

Thanks a lot for your advice.
Appreciate your help.

Vincent