How can I use nvstreammux without nvstreamdemux?

Hi!

I modified the deepstream-test1 sample code and want to send two H.264 videos to nvstreammux without using nvstreamdemux, because when I use nvstreamdemux I have to create two separate branches after nvstreammux, which is not reasonable.
Can you give me some advice, please? And how do I distinguish the videos in the osd_sink_pad probe (nvosd)?

Here is the original linking code:
gst_element_link_many (source1, h264parser, decoder, NULL);
gst_element_link_many (source2, h264parser2, decoder2, NULL);
gst_element_link_many (nvstreammux, nvstreamdemux, NULL);
gst_element_link_many (pgie, filter1, nvvidconv, filter2, nvosd, sink, NULL);
gst_element_link_many (pgie2, filter3, nvvidconv2, filter4, nvosd2, sink2, NULL);
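To feed both decoders into nvstreammux directly, each decoder's src pad must be linked to a request pad (sink_0, sink_1) on the muxer rather than via gst_element_link_many. A minimal sketch of that linking step; the variable names (decoder, decoder2, streammux) are assumptions following the deepstream-test1 naming style:

```c
/* Link a decoder's src pad to a named request pad on nvstreammux.
 * Sketch only: element creation, bin setup, and error reporting are elided. */
#include <gst/gst.h>

static gboolean
link_decoder_to_mux (GstElement *decoder, GstElement *streammux,
                     const gchar *mux_pad_name)
{
  GstPad *srcpad  = gst_element_get_static_pad (decoder, "src");
  GstPad *sinkpad = gst_element_get_request_pad (streammux, mux_pad_name);
  gboolean ok = (srcpad && sinkpad &&
                 gst_pad_link (srcpad, sinkpad) == GST_PAD_LINK_OK);
  if (srcpad)  gst_object_unref (srcpad);
  if (sinkpad) gst_object_unref (sinkpad);
  return ok;
}

/* Usage (after the elements are created and added to the pipeline):
 *   link_decoder_to_mux (decoder,  streammux, "sink_0");
 *   link_decoder_to_mux (decoder2, streammux, "sink_1");
 *   gst_element_link_many (streammux, pgie, ..., NULL);
 */
```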

Hi zhangchao:
What do you want to do? Can you elaborate?

If you want to input two streams and then run inference, you can refer to the following pipeline:

gst-launch-1.0 filesrc location=VIDEO0026.mp4 ! qtdemux ! h264parse ! nvdec_h264 ! m.sink_0 nvstreammux name=m batching-method=1 batch-size=2 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=(string)RGBA' ! nvmultistreamtiler rows=2 columns=1 width=1920 height=1080 ! nveglglessink filesrc location=VIDEO0025.mp4 ! qtdemux ! h264parse ! nvdec_h264 ! m.sink_1

Thanks
wayne zhu

Hi waynezhu:

sudo gst-launch-1.0 nvstreammux name=mux batch-size=2 ! nvstreamdemux name=demux
filesrc location=./sample_720p.h264 ! h264parse ! nvdec_h264 ! queue ! mux.sink_0
filesrc location=./sample_720p2.h264 ! h264parse ! nvdec_h264 ! queue ! mux.sink_1
demux.src_0 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvvidconv ! "video/x-raw(memory:NVMM), format=NV12" ! nvvidconv ! "video/x-raw(memory:NVMM), format=RGBA" ! nvosd font-size=15 ! nveglglessink
demux.src_1 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvvidconv ! "video/x-raw(memory:NVMM), format=NV12" ! nvvidconv ! "video/x-raw(memory:NVMM), format=RGBA" ! nvosd font-size=15 ! nveglglessink

The above is that pipeline. Besides, I use C code based on the DeepStream sample (DeepStream_Release/sources/apps/deepstream-test1), not gst-launch.
Below is my pipeline:

src_1 (h264) – h264parse – nvdec_h264 –\
                                        nvstreammux – nvinfer – … – nvosd
src_2 (h264) – h264parse – nvdec_h264 –/

And if I use nvstreamdemux, as you can see, after the demux I have to create two nvinfer elements, two nvosd elements, and so on, which is not reasonable.

So, if I don't use nvstreamdemux, how do I build my pipeline? Does nvinfer support two video streams, and how do I distinguish the streams after nvstreammux?

Thanks very much!

Hi zhangchao,
I can’t understand your pipeline.

Why do you use nvstreammux if you want to input to nvinfer separately?

Thanks
wayne zhu

Why do we have nvstreammux? -> To support batching.
nvinfer (detection, classification) and nvtracker both support batch handling, but nvosd does not.

nvstreammux collects all channels' decoded data (NV12) into its contiguous buffers and scales them to the same resolution. (These buffers are allocated at the beginning, reused while the pipeline is running, and freed when the pipeline is destroyed.) The following elements, such as nvinfer and nvtracker, all take buffers from nvstreammux as input and update the metadata attached to each buffer.

nvosd, the encoder, and so on do not support batch handling, so you need nvstreamdemux (or nvmultistreamtiler) to split the different channels' buffers.

So for your pipeline you just need one nvinfer and one nvosd. One nvosd instance can handle multiple channels, but there should be a nvstreamdemux or nvmultistreamtiler before nvosd.
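On distinguishing streams after nvstreammux: the muxer attaches per-frame metadata identifying the source, so a pad probe downstream of the mux can tell the frames apart. The exact structures vary by release; the sketch below assumes the DeepStream 4.0+ metadata API (gst_buffer_get_nvds_batch_meta, NvDsBatchMeta, NvDsFrameMeta), which older releases do not expose in this form:

```c
/* Pad probe that walks the batch metadata attached by nvstreammux and
 * reads each frame's source index. Assumes DeepStream 4.0+ headers. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
mux_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (batch_meta == NULL)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    /* source_id (and pad_index) record which mux sink pad the frame used */
    g_print ("frame %d came from source %u\n",
             frame_meta->frame_num, frame_meta->source_id);
  }
  return GST_PAD_PROBE_OK;
}
```

The probe would typically be attached to the src pad of nvstreammux or the sink pad of nvosd with gst_pad_add_probe using GST_PAD_PROBE_TYPE_BUFFER.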

Hi zhangchao,

Just as we discussed, you can refer to the pipeline in comment 2.

If you want to encode the output, please use nvstreamdemux; if you want to display, please use nvmultistreamtiler before the output.

Thanks
wayne zhu

This can work.

gst-launch-1.0 nvstreammux name=mux batch-size=2 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvstreamdemux name=demux
filesrc location=./sample_720p.h264 ! h264parse ! nvdec_h264 ! queue ! mux.sink_0
filesrc location=./sample_720p2.h264 ! h264parse ! nvdec_h264 ! queue ! mux.sink_1
demux.src_0 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), format=RGBA" ! nvosd font-size=15 ! nvvidconv ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out3.mp4
demux.src_1 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), format=RGBA" ! nvosd font-size=15 ! nvvidconv ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out4.mp4

If the files are the same format, you can read deepstream_app.c to do something.
But if they are not the same…

What do you mean?

Hi, I tried it to test the demux, with the inference part removed. But it doesn't work; it prints:

Setting pipeline to paused
Pipeline is prerolling
Redistribute latency
Redistribute latency

and it gets stuck here. After interrupting:

Interrupt: stopping pipeline
error: pipeline doesn't want to preroll
setting pipeline to null

When I removed one sink branch, it threw some warnings, but worked:

GStreamer-CRITICAL: gst_pad_push_event: assertion GST_IS_PAD failed
GStreamer-CRITICAL: gst_pad_peer_query: assertion GST_IS_PAD failed

In addition, I cannot set the mux's batch-size; it prints:

WARNING: erroneous pipeline: could not set property "batch-size" in element "mux" to 2

Whatever number I input, it says I cannot set the property.

Can you help me?

Hi, I need to get the image buffer and the inference result from the stream with nvstreammux; how can I do that? I added a probe on the nvosd pad and mapped the GstBuffer, but the mapinfo->data I get only contains 8 bytes of data.
Would appsink work? I haven't tried that.
Thanks.

Reply #11

Get image buffer: can you refer to "source/gst-plugins/gst-dsexample/gstdsexample.cpp" -> gst_dsexample_transform_ip() -> surface = *((NvBufSurface **) in_map_info.data);

I've tried that, but the surface is wrong. Accessing surface->size, width, or height causes a segmentation fault (core dumped). I printed the addresses as follows:
in_map_info.data: 0x7f97ad7e5fd0
surface: 0x7a7a7a7a7a7a7a7a
surface.data: 0x7a7a7a7a7a7a7a9a

In the pipeline I add a video converter before appsink for saving to file. Is the nvvidconv causing this error?

In addition, the HTML documentation about the surface has a mistake in its example (the cast and dereference are wrong):
nvbuf = ((NvBufSurface)info.data)
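For reference, the gst-dsexample pattern dereferences a double pointer: mapped NVMM buffers hold a pointer to the surface, not the pixels, which is also why a mapped buffer appears to contain only 8 bytes (one 64-bit pointer). A sketch of that pattern, assuming an NVMM buffer produced upstream and the gstdsexample-style NvBufSurface layout (the header name is an assumption and varies across DeepStream versions):

```c
/* Map a GstBuffer whose payload is a pointer to an NvBufSurface.
 * Sketch only: assumes the buffer is NVMM memory, as in gst-dsexample.
 * Inserting a converter that copies to system memory upstream of this
 * point would break the cast and produce garbage pointers. */
#include <gst/gst.h>
#include "nvbufsurface.h"   /* header name varies by DeepStream release */

static void
inspect_surface (GstBuffer *buf)
{
  GstMapInfo in_map_info;
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ))
    return;

  /* in_map_info.data holds a pointer, so dereference twice */
  NvBufSurface *surface = *((NvBufSurface **) in_map_info.data);
  g_print ("surface: %p\n", (void *) surface);

  gst_buffer_unmap (buf, &in_map_info);
}
```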