Split pipeline after pgie

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1.0
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3.0
• Issue Type: Question

Hi all,

I want to build a pipeline like this:

There will be three different rtsp sources.

All of them will be run through the same detection model.

At this point, the pipeline should be split into three branches. The first will track objects and run classifier models on the detection output that comes from the first RTSP source. The second will analyze the overcrowding status from the second source's output. The third will create a heatmap from the output of the third source. I am fairly new to DeepStream and GStreamer and could not figure out how to build this pipeline. I am trying to find the optimal solution, and I do not think creating three separate pipelines would be efficient.
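Roughly, the single-pipeline layout I am imagining looks like the sketch below (untested, written with the GStreamer Python bindings; I am only guessing that nvstreamdemux is the right element to split the batch back into per-source branches after pgie, and the config path and downstream branch elements are placeholders):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("shared-pgie-pipeline")

# Batch the three RTSP sources into one stream (source bins/uridecodebin omitted).
streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("batch-size", 3)
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batched-push-timeout", 40000)

# One shared detector (pgie) runs on the whole batch, so the engine is loaded once.
pgie = Gst.ElementFactory.make("nvinfer", "pgie")
pgie.set_property("config-file-path", "pgie_config.txt")  # placeholder path

# Split the batch back into one branch per source after inference.
demux = Gst.ElementFactory.make("nvstreamdemux", "demux")

for element in (streammux, pgie, demux):
    pipeline.add(element)
streammux.link(pgie)
pgie.link(demux)

# Branch 0: tracker + classifiers, branch 1: nvdsanalytics (overcrowding),
# branch 2: heatmap generation. The branch elements themselves are omitted here.
for source_id in range(3):
    branch_src_pad = demux.get_request_pad("src_%u" % source_id)
    # ... link branch_src_pad to the elements for this source ...
```

This way the detection model would be deserialized only once, while each source still gets its own downstream processing.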

Thank you in advance for your help.

If you want to customize your own pipeline, it is very important to have GStreamer and DeepStream knowledge and skills; otherwise it will be hard for you to understand how the pipeline works and where to add the functionality you need.

The model will do inferencing on the batched frames. There is a source_id field in the frame meta, so you can decide which meta info is needed and which is not. MetaData in the DeepStream SDK — DeepStream 5.1 Release documentation
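For example, with the Python bindings (pyds) a buffer probe placed after pgie can read source_id for every frame in the batch and decide what to do with it. A minimal sketch following the pattern used in the deepstream_python_apps samples (the per-source handling is just placeholder comments):

```python
import pyds
from gi.repository import Gst

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        if frame_meta.source_id == 0:
            pass  # keep tracker/classifier results for the first source
        elif frame_meta.source_id == 1:
            pass  # use detections for overcrowding analysis (second source)
        elif frame_meta.source_id == 2:
            pass  # accumulate detections into the heatmap (third source)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The probe can be attached with pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0).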

nvdsanalytics can be configured per stream id. Gst-nvdsanalytics — DeepStream 5.1 Release documentation
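For instance, the per-stream sections in the nvdsanalytics config file let you enable overcrowding analysis only for a specific stream id. A sketch based on the sample config_nvdsanalytics.txt shipped with the SDK (the ROI coordinates and threshold are placeholder values):

```
[property]
enable=1
config-width=1920
config-height=1080

# overcrowding enabled only for stream id 1 (the second source)
[overcrowding-stream-1]
enable=1
roi-OC=295;643;579;634;642;913;56;828
object-threshold=10
class-id=-1
```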

Where did this heatmap come from? From the classifier model?

Hi Fiona,

I am familiar with DeepStream and GStreamer but do not have much experience with them. To be clear, I have already done all of this with a single RTSP source; I have no problem configuring nvdsanalytics or creating the heatmap and streaming it to a local server. I just have to do these processes separately for the different sources. The common step is running inference with the pgie model on all three sources. After that I will use the pgie output of the first source to track and classify objects, the output of the second source to analyze the overcrowding status of the given area, and the output of the third source to create and stream the heatmap. Something like this (maybe the tee element could be useful for it):

I could create three different pipelines, or three different applications, one for each source, and do it all easily, but the common step could be a problem, because it means I would deserialize the same model three times, and that would use too much RAM.

You don’t need to do it with multiple pipelines. Just use the source_id in frame_meta to separate the information coming from the different sources.

But I do not want to track and classify detected objects for the second and third sources.
Also, I don’t want to use nvdsanalytics for the first source. With the method you described, as I understand it, all of these operations will actually be performed on every source and I will simply ignore the output I do not need, am I right?

DeepStream cannot support such a function in one pipeline.

OK then. Thank you for your time.