DeepStream custom pipeline

Brief: Need guidance for a custom DeepStream pipeline.

Hardware and software details:
Jetson Orin Nano
DeepStream: 6.4-multiarch
JetPack Version: 5.2
Issue Type: questions

I am working on a custom pipeline using the PeopleNet model. My current pipeline is as follows:

appsrc -> nvvideoconvert -> capsfilter -> nvstreammux -> nvinfer -> nvtracker -> nvvideoconvert -> capsfilter -> nvvideoconvert(rgba) -> appsink
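
For reference, this is roughly how I construct the single-source pipeline above with gst_parse_launch() (the output side is slightly condensed; the resolution, caps and the nvinfer/nvtracker configuration paths below are placeholders rather than my real values):

    // Minimal single-source sketch; caps, sizes and config paths are placeholders.
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        GError *err = NULL;
        GstElement *pipeline = gst_parse_launch(
            /* inference chain first so the source branch can refer to "mux" below */
            "nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 "
            "! nvinfer config-file-path=config_infer_primary_peoplenet.txt "
            "! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so "
            "! nvvideoconvert ! video/x-raw,format=RGBA "
            "! appsink name=sink emit-signals=true sync=false "
            /* source branch: appsrc -> nvvideoconvert -> capsfilter -> mux.sink_0 */
            "appsrc name=src is-live=true format=time "
            "! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0",
            &err);
        if (!pipeline) {
            g_printerr("Failed to build pipeline: %s\n", err ? err->message : "unknown");
            return -1;
        }

        /* Tell appsrc what the pushed frames look like (placeholder 1280x720 RGBA @ 30 fps) */
        GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
        GstCaps *caps = gst_caps_from_string(
            "video/x-raw,format=RGBA,width=1280,height=720,framerate=30/1");
        g_object_set(src, "caps", caps, NULL);
        gst_caps_unref(caps);
        gst_object_unref(src);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        /* ... connect need-data / new-sample callbacks and run a GMainLoop ... */
        return 0;
    }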

This pipeline lets me use my custom video source, push frames into the appsrc, and receive results from the appsink.
But this approach creates a new nvinfer instance for each new video source, which occupies a lot of RAM on the device. I need a pipeline that can handle multiple feeds using a single nvinfer for detection and multiple nvtracker instances. I am confused about how to connect the pipeline and which new GStreamer elements would be needed.
I can certainly use the appsrc to send multiple inputs one by one and increase the overall FPS of the nvinfer element, but I am not sure how nvtracker will behave. I have read the multiple-source example in deepstream-example-apps; using appsrc and custom NvDs metadata for the feed ID would deliver the multiple-source part without using nvstreammux, but please suggest an approach that keeps appsrc as a compulsory element.

Please help with the above.
Thanks.
Pravesh.

What does this mean? nvtracker can track all detected objects, so why do you need multiple nvtracker instances?

What is your source?

My source is a libav implementation that gives me each frame as a cv::Mat object, which I then convert to a GstBuffer and send to the appsrc with gst_app_src_push_buffer.

Here is my input callback, which runs on the need-data signal:

        // Pull the next decoded frame from the libav wrapper
        auto *avFrame = OfflinePipeline::fp->get_processed_frame();
        if (avFrame) {
            /* Attach custom metadata, wrap the frame in a GstBuffer and push it */
            FrameMeta *frame_meta = (FrameMeta *)g_malloc0(sizeof(FrameMeta));
            frame_meta->pts = avFrame->pts;
            frame_meta->segment_info = OfflinePipeline::fp->current_segment;
            auto *buffer = get_frame_buffer(avFrame, frame_meta);
            GstFlowReturn ret = gst_app_src_push_buffer((GstAppSrc *)appsource, buffer);
            if (ret == GST_FLOW_OK) {
                OfflinePipeline::pp->tlogger->debug("[pipeline] buffer pushed");
            } else {
                OfflinePipeline::pp->tlogger->error("[pipeline] failed to push");
            }

            av_frame_free(&avFrame);
        } else {
            OfflinePipeline::pp->tlogger->error("[pipeline] frame could not be decoded");
            goto retry;
        }

What is the original video source before you handle it with libav? A local video file? A camera device? Live video streaming such as HTTP, RTSP, RTMP, …?

It sometimes comes from Redis as a protocol buffer, and sometimes from a TS segment list; for the TS part, it is an H.264-encoded .ts segment.

It seems you are using appsrc to get the TS stream from the Redis payload and decoding the TS H.264 stream with libav.

The TS stream can be decoded by GStreamer and DeepStream elements.

There is tsdemux (gstreamer.freedesktop.org) to demux the TS stream and h264parse (gstreamer.freedesktop.org) to parse the H.264 stream. There is nvv4l2decoder to decode the H.264 stream with the hardware-accelerated video decoder. nvv4l2decoder outputs hardware buffers that can be fed to nvstreammux directly, without any extra conversion or copy.

So all you need to do is write an appsrc that gets the TS stream from the Redis protocol stack and outputs it with “video/mpegts” caps.
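
A rough sketch of such a TS branch (the element variable names and the helper function are just placeholders; the caps fields mirror tsdemux's sink pad template). Since tsdemux creates its source pads dynamically, h264parse is linked in the "pad-added" callback:

    #include <gst/gst.h>

    /* Link tsdemux's dynamically created H.264 video pad to h264parse */
    static void on_tsdemux_pad_added(GstElement *demux, GstPad *pad, gpointer user_data)
    {
        GstElement *parser = GST_ELEMENT(user_data);
        GstCaps *caps = gst_pad_get_current_caps(pad);
        if (!caps)
            return;   /* only link pads whose caps are already known */
        const gchar *name = gst_structure_get_name(gst_caps_get_structure(caps, 0));
        if (g_str_has_prefix(name, "video/x-h264")) {
            GstPad *sinkpad = gst_element_get_static_pad(parser, "sink");
            if (!gst_pad_is_linked(sinkpad))
                gst_pad_link(pad, sinkpad);
            gst_object_unref(sinkpad);
        }
        gst_caps_unref(caps);
    }

    /* Build appsrc -> tsdemux -> h264parse -> nvv4l2decoder inside the pipeline.
     * Returns the decoder; its src pad is later linked to an nvstreammux sink_%u pad. */
    static GstElement *add_ts_branch(GstElement *pipeline)
    {
        GstElement *src    = gst_element_factory_make("appsrc", "ts-src");
        GstElement *demux  = gst_element_factory_make("tsdemux", NULL);
        GstElement *parser = gst_element_factory_make("h264parse", NULL);
        GstElement *dec    = gst_element_factory_make("nvv4l2decoder", NULL);

        /* Advertise the pushed bytes as an MPEG transport stream */
        GstCaps *ts_caps = gst_caps_from_string("video/mpegts,systemstream=true,packetsize=188");
        g_object_set(src, "caps", ts_caps, "is-live", TRUE, "format", GST_FORMAT_TIME, NULL);
        gst_caps_unref(ts_caps);

        gst_bin_add_many(GST_BIN(pipeline), src, demux, parser, dec, NULL);
        gst_element_link(src, demux);
        gst_element_link(parser, dec);
        g_signal_connect(demux, "pad-added", G_CALLBACK(on_tsdemux_pad_added), parser);
        return dec;
    }

The raw TS bytes popped from Redis are then pushed into "ts-src" with gst_app_src_push_buffer(), the same way you already push decoded frames.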

I meant that my input callback is called in two types of applications:

  • redis input
    • a protocol buffer is popped from Redis; it contains the RGB frame data in bytes, which is converted to a GstBuffer and pushed.
  • ts input
    • a playlist is used to fetch TS segments; they are decoded by libav, and the resulting frame data is converted to a GstBuffer and pushed.

This approach is used to dynamically integrate the two inputs. What I want is to have one or more appsrc elements for multiple feeds, all feeding a single nvinfer and nvtracker. I believe the new pipeline would look somewhat like this:

appsrc1 -> nvvideoconvert -> capsfilter -> | nvstreammux -> nvinfer .. rest all same
appsrc2 -> nvvideoconvert -> capsfilter -> |
appsrc3 -> nvvideoconvert -> capsfilter -> |
appsrcn -> nvvideoconvert -> capsfilter -> |

Or maybe the pipeline could look like this:

appsrc -> nvvideoconvert -> capsfilter -> (here we can make multiple streams based on feed-id in meta) -> nvstreammux -> nvinfer -> nvtracker ... rest all same

I am new to GStreamer intricacies, sorry for any mistakes.

DeepStream does work in this way.

The two types of inputs are different sources. The TS streaming source can be used directly with DeepStream; no libav is needed:
appsrc0 -> tsdemux -> h264parse -> nvv4l2decoder ->

The RGB frame source should be implemented by reading the data into a GstBuffer and sending it to the downstream conversion element:

appsrc1 -> nvvideoconvert ->

appsrc0 -> tsdemux -> h264parse -> nvv4l2decoder -> | nvstreammux -> nvinfer ->
appsrc1 -> nvvideoconvert -> capsfilter ->          |
...
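
Something like the following can be used to attach each branch; the helper name is a placeholder, and the nvstreammux "sink_%u" request pads plus the "batch-size" property are the relevant parts:

    #include <gst/gst.h>

    /* Request a sink_%u pad on nvstreammux and link the last element of a
     * source branch to it. On GStreamer >= 1.20 you can use
     * gst_element_request_pad_simple() instead of gst_element_get_request_pad(). */
    static gboolean attach_to_streammux(GstElement *streammux,
                                        GstElement *branch_tail,
                                        guint source_index)
    {
        gchar  *pad_name = g_strdup_printf("sink_%u", source_index);
        GstPad *sinkpad  = gst_element_get_request_pad(streammux, pad_name);
        GstPad *srcpad   = gst_element_get_static_pad(branch_tail, "src");
        gboolean ok = (sinkpad && srcpad &&
                       gst_pad_link(srcpad, sinkpad) == GST_PAD_LINK_OK);
        if (srcpad)  gst_object_unref(srcpad);
        if (sinkpad) gst_object_unref(sinkpad);
        g_free(pad_name);
        return ok;
    }

    /* Usage sketch during pipeline construction (element variables assumed):
     *   attach_to_streammux(streammux, decoder0,    0);  // appsrc0 -> ... -> nvv4l2decoder
     *   attach_to_streammux(streammux, capsfilter1, 1);  // appsrc1 -> nvvideoconvert -> capsfilter
     *   g_object_set(streammux, "batch-size", 2, NULL);
     */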

Please answer the question above, too.

I am sorry, but you are suggesting big changes to my current pipeline. I can certainly make these changes, but I want programmability.
If you can answer this question, it will help me a lot:
Can I connect multiple (appsrc -> nvvideoconvert -> capsfilter) branches to nvstreammux on sink_0, sink_1, sink_2, etc.? Since nvstreammux is linked to (nvinfer -> nvtracker), will it handle the multiple inputs, and in which data structure inside NvDsBatchMeta will my appsink receive the outputs for each sink?

It surely can. Please make sure to set the “batch-size” property of nvstreammux to the number of your sources.
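
As for where the per-source outputs appear in your appsink: each input connected to an nvstreammux sink_%u pad shows up as one NvDsFrameMeta in batch_meta->frame_meta_list, and frame_meta->source_id (or pad_index) tells you which feed it came from. A rough sketch of a "new-sample" callback (the callback name and the print are placeholders):

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>
    #include "gstnvdsmeta.h"

    static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data)
    {
        GstSample *sample = gst_app_sink_pull_sample(sink);
        if (!sample)
            return GST_FLOW_ERROR;

        GstBuffer *buf = gst_sample_get_buffer(sample);
        NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
        if (batch_meta) {
            /* One NvDsFrameMeta per source in the batch */
            for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
                NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
                /* Tracked detections for this source are in obj_meta_list */
                for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
                    NvDsObjectMeta *obj = (NvDsObjectMeta *)l_obj->data;
                    g_print("source %u: class %d, track id %" G_GUINT64_FORMAT "\n",
                            frame_meta->source_id, obj->class_id, obj->object_id);
                }
            }
        }
        gst_sample_unref(sample);
        return GST_FLOW_OK;
    }

Connect it with g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL) and set "emit-signals" to TRUE on the appsink.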

Thank you very much, I will give this a try today :)

Hey @Fiona.Chen, thanks, it worked!
