Split pipeline after pgie

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1.0
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3.0
• Issue Type: Question

Hi all,

I want to build a pipeline like this:

There will be three different rtsp sources.

All of them will be processed by the same detection model.

At this point, the pipeline should split into three branches. The first will track objects and run classifier models on the detection output that comes from the first RTSP source. The second will analyze overcrowding status using the second source's outputs. The third will create a heatmap from the output of the third source. I am fairly new to DeepStream and GStreamer and could not figure out how to build the pipeline. I am trying to find the optimal solution, and I do not think creating three separate pipelines would be efficient.

Thank you in advance for your help.

If you want to customize your own pipeline, it is very important to have GStreamer and DeepStream knowledge and skills; otherwise it will be hard for you to understand how the pipeline works and where to add the functions you need.

The model will do inference in batches. There is a source_id field in the frame meta, so you can decide which meta info is needed and which is not. MetaData in the DeepStream SDK — DeepStream 6.3 Release documentation
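The per-source routing this suggests can be sketched independently of the DeepStream API. In a real application this logic would live in a pad probe on the pgie src pad, iterating `NvDsFrameMeta` via the pyds bindings; here frame metas are stood in for by plain dicts, so the names and structure are illustrative only:

```python
# Minimal, DeepStream-independent sketch: dispatch each frame meta in a
# batched buffer to a per-source task based on its source_id. A real
# pad-probe would walk batch_meta.frame_meta_list instead of a list of dicts.

def route_frames(batch):
    """Split one batch's frame metas into per-task lists by source_id."""
    tasks = {"line_crossing": [], "overcrowding": [], "heatmap": []}
    # source_id 0/1/2 correspond to the three RTSP sources in this thread.
    mapping = {0: "line_crossing", 1: "overcrowding", 2: "heatmap"}
    for frame_meta in batch:
        task = mapping.get(frame_meta["source_id"])
        if task is not None:  # ignore any unexpected source_id
            tasks[task].append(frame_meta)
    return tasks

batch = [
    {"source_id": 0, "frame_num": 10},
    {"source_id": 1, "frame_num": 10},
    {"source_id": 2, "frame_num": 10},
]
routed = route_frames(batch)
print(len(routed["overcrowding"]))  # 1
```

Each branch then processes only its own list and simply ignores the metas belonging to the other sources.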

The nvdsanalytics configuration can be set per stream ID. Gst-nvdsanalytics — DeepStream 6.3 Release documentation
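The per-stream sections of an nvdsanalytics config file look roughly like this (coordinates and thresholds below are placeholder values, and the exact key names should be checked against the linked Gst-nvdsanalytics page):

```ini
[property]
enable=1
config-width=1920
config-height=1080

# Line crossing only for stream 0 (the first RTSP source)
[line-crossing-stream-0]
enable=1
line-crossing-Entry=100;500;800;500
class-id=-1

# Overcrowding only for stream 1 (the second RTSP source)
[overcrowding-stream-1]
enable=1
roi-OC=100;100;800;100;800;600;100;600
object-threshold=10
class-id=-1
```

The `-stream-<n>` suffix is what scopes each analysis to a single source, so one nvdsanalytics element can run different checks on different streams of the same batch.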

Where did this heatmap come from? From the classifier model?

Hi Fiona,

I am familiar with but do not have much experience in DeepStream and GStreamer. To be clear, I did all of this with only one RTSP source; I have no problem configuring nvdsanalytics or creating the heatmap and streaming it to a local server. I just have to run these processes separately for the different sources. The common step is running inference with the pgie model on all three sources. After that I will use the pgie output of the first source to track and classify objects, the outputs of the second source to analyze the overcrowding status of a given area, and the outputs of the third source to create and stream the heatmap. Something like this (maybe the gst-tee element could be useful here):

I can create three different pipelines, or three different applications, one per source, and do it all easily, but the common step could be a problem: that means I would deserialize the same model three times, which would exhaust the available RAM.

You don’t need to use multiple pipelines. Just use the source_id in frame_meta to separate the information from the different sources.

But I do not want to track and classify detected objects for the second and third sources, and I don’t want to use nvdsanalytics for the first source. With the method you describe, as I understand it, all of these operations will still run for every source; I just won’t use some of the outputs. Is that right?

DeepStream cannot support such a function in one pipeline.

OK then. Thanks for your help.

Hello again,

I just returned to this issue, and I think it is better to continue in this topic. It turns out that the Gst-nvstreamdemux element does what I wanted, so I changed the pipeline to this:

```
src0 ↘                                  ↗ queue1 → tracker → sgie → analytics (line-crossing) → videoconvert → sink
src1 → streammux → pgie → streamdemux →   queue2 → analytics (overcrowding) → videoconvert → sink
src2 ↗                                  ↘ queue3 → videoconvert → capsfilter → sink
```
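A rough gst-launch-1.0 sketch of that layout follows; the element properties, RTSP URIs, and the `...` segments are placeholders that would need to be filled in for a real run, so treat this as a shape, not a tested command:

```
gst-launch-1.0 \
  nvstreammux name=mux batch-size=3 width=1920 height=1080 ! \
    nvinfer config-file-path=pgie_config.txt ! \
    nvstreamdemux name=demux \
  rtspsrc location=rtsp://<cam0> ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  rtspsrc location=rtsp://<cam1> ! ... ! mux.sink_1 \
  rtspsrc location=rtsp://<cam2> ! ... ! mux.sink_2 \
  demux.src_0 ! queue ! nvtracker ... ! nvinfer ... ! nvdsanalytics ... ! nvvideoconvert ! fakesink \
  demux.src_1 ! queue ! nvdsanalytics ... ! nvvideoconvert ! fakesink \
  demux.src_2 ! queue ! nvvideoconvert ! capsfilter ... ! fakesink
```

The key point is that the single pgie runs once on the batched stream, and nvstreamdemux then gives each source its own branch with its own downstream elements.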

It actually works well, but I want to use udpsink at the end of the pipeline, so I should have something like this:

source0 and source3 are the line-crossing analysis cameras.
source1 and source4 are the overcrowding analysis cameras.
source2 and source5 are the custom heatmap application cameras.

I added the extra sources (3, 4, 5) to show why the elements nvstreammux-lc, nvstreammux-oc, nvstreammux-hm, nvstreamdemux-lc, nvstreamdemux-oc and nvstreamdemux-hm exist.

Here are the GStreamer debug logs (level 4): error_logs.txt (862.1 KB)

If there is only one source (no matter which type), the system works well, but adding more sources causes this problem.

If you need more information to trace the problem, please ask.

UPDATE: A segmentation fault (core dumped) occurred twice when running with only one source, after processing about 40k frames.

Hi feyzi,

Suggest to open a new topic. Thanks

It seems that nvstreammux does not support a stream with metadata as input anyway. I have found another solution.
