How can I get event and image information from the deepstream pipeline?

I am building the following pipeline:

[appsrc0…appsrc(n)]-[nvstreammux]-[pgie]-[nvtracker]-[sgie]-[nvmultistreamtiler]-[nvosd]-[appsink]

The pipeline runs without abnormal termination.

I tried to check the event information by attaching a probe to the sink pad of the nvosd plugin.

When I fed RGBA images to appsrc0 and appsrc1 respectively, I found that the NvDsFrameMeta::frame_num value increments and the NvDsFrameMeta::stream_id value correctly alternates between 0 and 1.

However, I could not get any event-related metadata.

I took a screenshot of the GstBuffer contents at the nvstreammux sink_0 pad to verify that I was not feeding the image data incorrectly, and there was no abnormality there.

However, at the appsink stage I received the correct image resolution information, but the actual image data was corrupted.

When I created a pipeline using filesrc instead of appsrc, like this, there was no problem taking a screenshot from appsink.

[filesrc]-[decodebin]-[nvidia plugins(including nvstreammux)]-[appsink]

Through some tests I have come to the conclusion that if you feed RGBA data to nvstreammux via appsrc, you will not receive image data at any later stage.

I would like to know how to get images and event information for each channel. Please advise if you notice anything unusual.

Why do you have to use appsrc to input RGBA data into the pipeline? Why not use a decoder? decodebin outputs NV12 to streammux.

Thank you for your interest in my post.

The data fed to appsrc is RGBA image data decoded by another pipeline; I use a separate pipeline for source-level decoding. The attached PNG shows the image-processing pipeline.

Initially I connected the decoder directly to nvstreammux, as in the deepstream-app example. However, with decoders connected to nvstreammux on multiple channels, I could not find a way to restart the source when an EOS message arrived on a particular channel.

So I chose to decode the source in a separate pipeline and feed the resulting RGBA data into an appsrc belonging to another pipeline that contains the NVIDIA plugins.

The problem of missing image data at appsink has been resolved by adding a GPU memory conversion stage (nvvidconv plus a capsfilter) between appsrc and nvstreammux.

Now the event information at the nvosd sink looks strange, and I am trying to find the cause.
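For reference, the fix corresponds roughly to this pipeline fragment (a sketch; the resolution and caps values are assumptions based on the description above — the point is that nvvidconv plus the capsfilter converts the system-memory RGBA frames into NVMM (GPU) memory before nvstreammux):

```
appsrc caps="video/x-raw,format=RGBA,width=1280,height=720,framerate=30/1" !
    nvvidconv ! "video/x-raw(memory:NVMM), format=RGBA" ! mux.sink_0
nvstreammux name=mux batch-size=1 width=1280 height=720 ! ...
```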

Is it possible to restart a particular channel (for EOS handling or reconnection) while running with multichannel sources (file or RTSP) connected to nvstreammux? Is there any example related to this?

Currently, both the metadata and the image data look wrong.

I attached the probe to the nvosd sink pad following the method shown in the deepstream-test1 example, and I am inspecting the frame metadata with the following routine:


gpointer pState = NULL;
GstMeta *pMeta = NULL;
NvDsMeta *pDSMeta = NULL;
NvDsFrameMeta *pFrameMeta = NULL;

while ((pMeta = gst_buffer_iterate_meta(pBuf, &pState)) != NULL) {
    /* Skip metadata that does not carry the NvDs tag */
    if (!gst_meta_api_type_has_tag(pMeta->info->api, _nvdsmeta_quark)) {
        continue;
    }
    pDSMeta = (NvDsMeta *)pMeta;
    if (pDSMeta->meta_type == NVDS_META_FRAME_INFO) {
        pFrameMeta = (NvDsFrameMeta *)pDSMeta->meta_data;
        if (pFrameMeta == NULL) {
            break; /* wrong data */
        }

        /* Data survey location */
    }
}

I am inspecting the NvDsFrameMeta data. The result is that "stream_id" is the channel index and "frame_num" is the frame index.

There are three problems identified:
a. If the source is a file, the "frame_num" value is not incremented.
b. When two or more RTSP sources are connected, "frame_num" values are output redundantly. (The probe was set only once, on nvosd.)
c. When multiple channels are connected, appsink delivers a single image containing every channel.

Please let me know if you have any known problems or doubts.

I found something strange regarding the file source's missing frame_num values.

Here is some of my log output:

[FRAME_META] stream_id(1) frame_num(42) gie(type,batch_size,id)=(1,4,1) num(rects,strings)=(0,0) batch_id(0) nvosd_mode(CPU)
[FRAME_META] stream_id(0) frame_num(0) gie(type,batch_size,id)=(1,0,1) num(rects,strings)=(14,14) batch_id(0) nvosd_mode(GPU)
[FRAME_META] stream_id(1) frame_num(43) gie(type,batch_size,id)=(1,4,1) num(rects,strings)=(0,0) batch_id(0) nvosd_mode(CPU)
[FRAME_META] stream_id(0) frame_num(0) gie(type,batch_size,id)=(1,0,1) num(rects,strings)=(13,13) batch_id(0) nvosd_mode(GPU)
[FRAME_META] stream_id(1) frame_num(44) gie(type,batch_size,id)=(1,4,1) num(rects,strings)=(0,0) batch_id(0) nvosd_mode(CPU)
[FRAME_META] stream_id(0) frame_num(0) gie(type,batch_size,id)=(1,0,1) num(rects,strings)=(12,12) batch_id(0) nvosd_mode(GPU)
[FRAME_META] stream_id(1) frame_num(45) gie(type,batch_size,id)=(1,4,1) num(rects,strings)=(0,0) batch_id(0) nvosd_mode(CPU)
[FRAME_META] stream_id(0) frame_num(0) gie(type,batch_size,id)=(1,0,1) num(rects,strings)=(12,12) batch_id(0) nvosd_mode(GPU)
[FRAME_META] stream_id(1) frame_num(46) gie(type,batch_size,id)=(1,4,1) num(rects,strings)=(0,0) batch_id(0) nvosd_mode(CPU)

A stream_id of 0 is the file source and 1 is the RTSP source.

The noticeable difference between the file source and the RTSP source is the nvosd_mode value. Is the nvosd_mode setting related to the problem? If so, which value do I need to modify?

I attach the currently implemented pipeline and log information.

log_frame_meta.txt (2.74 KB)
src_pipe(file).png



log_gstreamer.txt (254 KB)

I think I misunderstood it. Based on the logs, I draw the following conclusions:

a. If no event occurs, the frame_num value is output and nvosd_mode is CPU.
b. When an event occurs, nvosd_mode is GPU, and the num_rects and num_strings values equal the number of events.

Is that right?

In the current configuration, "appsink" produces a single image with all channel images combined. Is it possible to get a separate video image for each channel by using the "nvstreamdemux" plugin instead of "nvmultistreamtiler"?

Proposed configuration:
appsrc(n) - nvstreammux - pgie - nvtracker - sgie - nvstreamdemux - nvosd(n) - appsink(n)

I tried nvstreamdemux, but it does not work. I have confirmed that the pipeline links correctly. The log shows no errors, but the pipeline seems to stall at the final appsink stage.

Please let me know if I have implemented something incorrectly, or if there is a way to get per-channel images using nvmultistreamtiler.

This can work for your reference:

gst-launch-1.0 nvstreammux name=mux batch-size=2 ! nvinfer full-frame=1 unique-id=1 config-file-path=./example_primary_detector.txt ! nvstreamdemux name=demux \
uridecodebin uri=file:////workspace/DeepStream_Release/samples/streams/sample_720p_0.h264 ! queue ! mux.sink_0 \
uridecodebin uri=file:////workspace/DeepStream_Release/samples/streams/sample_720p_1.h264 ! queue ! mux.sink_1 \
demux.src_0 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), format=RGBA" ! nvosd font-size=15 ! nvvidconv ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out3.mp4 \
demux.src_1 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), format=RGBA" ! nvosd font-size=15 ! nvvidconv ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out4.mp4

Thank you for your interest in my post.

The command line you gave needed some option adjustments, but I found that it works well in my environment.

I have compared the working pipeline to the problem pipeline, but I cannot find any difference other than filesrc versus appsrc.

I have added more plugins such as a capsfilter and changed the format from RGBA to NV12, but the issue is not resolved.

One suspicious thing is that there is only one "current-level-time" entry among the "queue" properties after "nvstreamdemux" in the problem pipeline. Therefore, I suspect that when image data is added manually to more than one appsrc, "nvstreamdemux" cannot distinguish between the channels.

I tried to check NvStreamMeta::stream_id by attaching a probe to the "nvstreammux" src pad, but I could not access it even though I used the same routine as for nvosd.

Please advise me if I am making a mistake.