DeepStream 5.0 SmartRecord: recording from multiple sources at the same time

• Hardware Platform (Jetson / GPU)
Jetson Nano
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
R32 (release), REVISION: 5.1, GCID: 27362550, BOARD: t210ref, EABI: aarch64, DATE: Wed May 19 18:07:59 UTC 2021
• Question
I want to stream multiple RTSP sources and be able to run more than one recording at the same time, triggered by events. From what I have tried, NvDsSRCreate() always returns 0 for the session ID, and if I call NvDsSRStart while a recording is already in progress, it does not return NVDSSR_STATUS_OK. So I think I cannot record more than one stream at the same time using only one NvDsSRContext. Do I have to create one context per source? If so, what would the pipeline look like? Or am I missing something?

Thanks

Would the pipeline below be a good practice for using one NvDsSRContext per source?

source → do others → tee ─→ queue1 → recordbin1
                       ├──→ queue2 → recordbin2
                       ├──→ queue3 → recordbin3
                       └──→ queueN → recordbinN

Yes. One smart record bin can only support one recording at a time.

Yes. You need multiple record bins to produce multiple recording files.

Thank you for the reply.

I have another question: why does NvDsSRStop send a GST_MESSAGE_EOS to the GStreamer bus callback? Is there a way to stop NvDsSRStop from doing that? If I simply ignore the signal everything turns out fine, but it always prints the info below:

End of stream
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

It looks like the stream is restarting.

The GST_MESSAGE_EOS signal is necessary for the sink to know the recording has finished. It is a basic GStreamer signal; please refer to the GStreamer documentation: https://gstreamer.freedesktop.org/
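For illustration, the bus callback can inspect the message source and only quit the main loop when the EOS comes from the rest of the pipeline, not from a record bin finalizing a file. A minimal sketch of the decision logic; the "record_bin" name prefix is an assumption, match it to whatever names your smart-record bins actually carry:

```c
#include <string.h>

/* Decide whether an EOS on the bus should end the application.
 * A smart-record bin posts EOS when it finalizes its file; that EOS
 * is expected and should be ignored.  Only EOS coming from elsewhere
 * in the pipeline should quit the main loop.  The "record_bin"
 * prefix is a hypothetical naming convention. */
static int should_quit_on_eos(const char *src_name)
{
    if (src_name == NULL)
        return 1;  /* unknown source: quit to be safe */
    return strncmp(src_name, "record_bin", strlen("record_bin")) != 0;
}

/* Inside a real bus callback this would be used roughly like:
 *
 *   case GST_MESSAGE_EOS: {
 *     gchar *name = gst_object_get_name (GST_MESSAGE_SRC (msg));
 *     if (should_quit_on_eos (name))
 *       g_main_loop_quit (loop);
 *     g_free (name);
 *     break;
 *   }
 */
```

This keeps the expected per-recording EOS from tearing down the whole application while still honoring a genuine end of stream.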

Thank you for the reply.

I did as you said and used multiple recording bins for multiple recording files. The pipeline is below:

source1 ─┐
source2 ─┼→ streammux → do others → tee ─→ queue1 → recordbin1
source3 ─┤                            ├──→ queue2 → recordbin2
sourceN ─┘                            ├──→ queue3 → recordbin3
                                      └──→ queueN → recordbinN

It works, but if I start more than one recordbin the resulting video is messy (the streams are somehow combined). My hypothesis is that a recordbin merely sets a flag and then records every frame that arrives at its sink pad. Is this true?
So if I want to record multiple sources at the same time, do I need a separate pipeline for each source (i.e. no streammux), like the pipelines below?

source1 → do something → recordbin1
source2 → do something → recordbin2
source3 → do something → recordbin3
source4 → do something → recordbin4

Please advise, thanks!

Can you elaborate on your complete pipeline, especially the “do others” part?

Hi, thanks for the reply.

Above is the full pipeline that I currently run. In a nutshell, I want to start recording per stream source based on a simple event (frame number modulo, just as a proof of concept). Once this simple pipeline works, I will create a custom motion-detection plugin to generate the events.

Thanks
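As a side note on the event trigger: the floor()/double-equality comparison used in the probe can be expressed with integer arithmetic instead, which avoids floating-point equality checks entirely. A minimal standalone sketch of the toggle condition, assuming the same 100-frame interval:

```c
/* Toggle recording every TOGGLE_INTERVAL frames.  Integer modulo is
 * equivalent to dividing by 100 and comparing against floor() of the
 * result, but avoids floating point altogether. */
#define TOGGLE_INTERVAL 100

static int should_toggle(int frame_num)
{
    return frame_num % TOGGLE_INTERVAL == 0;
}
```

Note that frame 0 also triggers a toggle, matching the behavior of the floating-point version.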

For more info

What I mean by “combine” is a result like this recording (shared via Google Drive): stream_0\^J_00003_20210714-145922_11736.mp4. It is from two RTSP stream sources running at once that somehow got “combined”.

This is a snippet of the important parts of the code:

typedef struct {
    NvDsSRContext *data[5];  /* one smart-record context per source */
} DataContainer_t;

tiler_src_pad_buffer_probe {
    ....

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
         l_frame = l_frame->next)
    {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);

        double tmp = (double)frame_meta->frame_num / 100;
        double tmp2 = floor(tmp);

        int camSrc = frame_meta->source_id;
        /* Toggle on every 100th frame: tmp is then a whole number. */
        if (tmp2 == tmp) {
            DataContainer_t *container = (DataContainer_t *) u_data;
            NvDsSRSessionId sessId;

            if (container->data[camSrc]->recordOn) {
                g_print ("-> Record from stream %d stop\n", camSrc);
                /* Session id 0: each context only ever has one session. */
                if (NvDsSRStop (container->data[camSrc], 0) != NVDSSR_STATUS_OK) {
                    g_printerr ("-> Unable to stop recording\n");
                }
            } else {
                g_print ("-> Record from stream %d start\n", camSrc);
                if (NvDsSRStart (container->data[camSrc], &sessId,
                        (guint)START_TIME, (guint)SMART_REC_DURATION,
                        NULL) != NVDSSR_STATUS_OK) {
                    g_printerr ("-> Unable to start recording\n");
                }
            }
        }

        g_print("Source = %d | Frame Number = %d %f\n",
                camSrc, frame_meta->frame_num, tmp);
    }

    ....
}

main {
    DataContainer_t container;
    NvDsSRInitParams params = {0};

    params.containerType = SMART_REC_CONTAINER;
    params.videoCacheSize = VIDEO_CACHE_SIZE;
    params.defaultDuration = SMART_REC_DEFAULT_DURATION;
    params.callback = smart_record_callback;
    params.dirpath = "result/";

    for (int i = 0; i < num_sources; i++) {
        g_print("Setup stream_%d pipeline\n", i);
        /* No trailing '\n' in the prefix: a stray newline here ends up
         * embedded in the recorded file's name (the "^J" seen above). */
        char tmp[16];
        snprintf(tmp, sizeof(tmp), "stream_%d", i);
        params.fileNamePrefix = (gchar*)tmp;

        if (NvDsSRCreate (&container.data[i], &params) != NVDSSR_STATUS_OK) {
            g_printerr ("Failed to create smart record bin for stream %d\n", i);
            return -1;
        }

        gst_bin_add(GST_BIN(pipeline), container.data[i]->recordbin);
        if (!gst_element_link(tee_recordbin, container.data[i]->recordbin)) {
            g_printerr("Elements of recordbin %d could not be linked. Exiting.\n", i);
            return -1;
        }
    }

    qpad = gst_element_get_static_pad(queue_pre_parser, "src");
    if (!qpad)
        g_print("Unable to get src pad from queue_pre_parser\n");
    else
        gst_pad_add_probe(qpad, GST_PAD_PROBE_TYPE_BUFFER,
                          tiler_src_pad_buffer_probe, &container, NULL);

}

nvstreammux is used to batch multiple sources into a single buffer. Your pipeline is wrong.

If you don’t need inferencing, nvstreammux is not needed.

So, by that statement, I need one pipeline for each source?

Yes.

Hi, I tried the pipelines below:

pipeline[0]: rtspsrc → rtph264depay → h264parse → recordbin1
pipeline[1]: rtspsrc → rtph264depay → h264parse → recordbin2
pipeline[N]: rtspsrc → rtph264depay → h264parse → recordbinN

I loop through all the provided RTSP source links and create a pipeline for each source. It works, but after a couple of successful recordings (start and stop), a recording start fails with the error below:

Recording stream 0 started..
Recording stream 1 started..
Recording stream 2 started..
ERROR from element mux_elem1: Could not multiplex stream.
Error details: gstqtmux.c(4561): gst_qt_mux_add_buffer (): /GstPipeline:stream-C85453656/GstBin:record_bin1/GstBin:enc_bin1/GstQTMux:mux_elem1:
Buffer has no PTS.
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

Then the bus loop receives EOS and the run loop exits. But when I try deepstream_testsr, those errors never occur. Can you give me a pointer?

Thank you!
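One way to narrow down the "Buffer has no PTS" failure is to attach a buffer probe in front of the muxer and drop (or log) buffers whose PTS is unset. GStreamer marks an unset timestamp with GST_CLOCK_TIME_NONE, which is (guint64)-1. A minimal sketch of the check, with the sentinel mirrored locally so it compiles without the GStreamer headers; the probe wiring in the comment is a hypothetical illustration:

```c
#include <stdint.h>

/* GStreamer represents an unset timestamp as GST_CLOCK_TIME_NONE,
 * i.e. (guint64)-1; mirrored locally here for a standalone build. */
#define CLOCK_TIME_NONE ((uint64_t)-1)

static int has_valid_pts(uint64_t pts)
{
    return pts != CLOCK_TIME_NONE;
}

/* In a real pipeline this check would live in a buffer probe on the
 * muxer sink pad, roughly:
 *
 *   static GstPadProbeReturn check_pts (GstPad *pad,
 *                                       GstPadProbeInfo *info,
 *                                       gpointer user_data) {
 *       GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
 *       if (!GST_BUFFER_PTS_IS_VALID (buf))
 *           return GST_PAD_PROBE_DROP;  // avoid "Buffer has no PTS"
 *       return GST_PAD_PROBE_OK;
 *   }
 */
```

Dropping such buffers only masks the symptom, but it tells you which element is emitting untimestamped data.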

The muxer received some wrong data. It is hard to know the reason from the log alone.

Do you have a recommendation on how to debug the error? When I increase the number of sources, the error occurs earlier.

The elements in your pipeline are all 3rd-party open-source plugins from GStreamer. See GStreamer Basic tutorial 11: Debugging tools.
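For example, the GStreamer debug output for the failing qtmux element can be raised before the pipeline is initialized; GStreamer reads GST_DEBUG from the environment in gst_init(). A small sketch, where the category "qtmux" and level 6 are just example values:

```c
#include <stdlib.h>
#include <string.h>

/* Must run before gst_init(): GStreamer reads GST_DEBUG from the
 * environment at initialization.  "qtmux:6" raises only the qtmux
 * category (the element reporting "Buffer has no PTS") to LOG level;
 * "*:3" would raise every category instead. */
static void enable_gst_debug_output(void)
{
    setenv("GST_DEBUG", "qtmux:6", 1);

    /* Directory for pipeline-graph dumps.  After the pipeline reaches
     * PLAYING, calling
     *   GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline),
     *       GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");
     * writes /tmp/pipeline.dot, which graphviz can render to inspect
     * the actual topology and negotiated caps. */
    setenv("GST_DEBUG_DUMP_DOT_DIR", "/tmp", 1);
}
```

The dot-file dump is especially useful here to confirm which pads the record bins are actually linked to.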

Thank you for the response.

I’ve tried deepstream-testsr from the sample apps. It works fine when the bounding boxes are included, but the same error as in my case occurs when I disable the bounding boxes (-e 0 flag).

What do you mean by “when I disable bounding box (-e 0 flag)”?

The deepstream-testsr readme says: To disable the bbox in the recorded video use "-e 0" at the end of command.

Example: ./deepstream-testsr-app rtsp://127.0.0.1/video1 -e 0

It has nothing to do with smart recording. Something is going wrong with your RTSP stream: the received buffer contains bad data.