All three sources will run through the same detection model.
At this point, the pipeline should split into three branches. The first will track objects and run classifier models on the detection output coming from the first RTSP source. The second will analyze overcrowding status using the second source's output. The third will create a heatmap from the third source's output. I am fairly new to DeepStream and GStreamer and could not figure out how to build such a pipeline. I am trying to find the optimal solution, and I do not think creating three separate pipelines would be efficient.
If you want to customize your own pipeline, it is very important to have GStreamer and DeepStream knowledge and skills; otherwise it will be hard for you to understand how the pipeline works and where to add the functions you need.
I am familiar with, but do not have much experience in, DeepStream and GStreamer. To be clear, I did all of this with only one RTSP source; I have no problem configuring nvdsanalytics or creating the heatmap and streaming it to a local server. I just have to run these processes separately for different sources. The common step is running inference with the PGIE model for all three sources. After that, I will use the PGIE output of the first source to track and classify objects, the output of the second source to analyze the overcrowding status of the given area, and the output of the third source to create and stream the heatmap. Something like this (maybe the GStreamer tee element could be useful for this):
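A rough sketch of the tee idea (element names are real DeepStream plugins, but the wiring is illustrative, not a tested pipeline). One caveat: each tee branch receives the full three-stream batch, so every branch either processes all sources or has to filter frames by `source_id` in the batch metadata:

```
rtspsrc (cam0) ─ decode ─ mux.sink_0 ┐
rtspsrc (cam1) ─ decode ─ mux.sink_1 ┼─ nvstreammux (mux) ─ nvinfer (pgie) ─ tee (t)
rtspsrc (cam2) ─ decode ─ mux.sink_2 ┘
t ─ queue ─ nvtracker ─ nvinfer (sgie classifiers) ─ ...   # intended for source 0
t ─ queue ─ nvdsanalytics (overcrowding) ─ ...             # intended for source 1
t ─ queue ─ heatmap application ─ ...                      # intended for source 2
```

This keeps a single PGIE instance (one model deserialization) while fanning the batched, inferred buffers out to the three downstream applications.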
I could create three different pipelines, or three different applications, one for each source, and do it all easily, but the common step would be a problem: it would mean deserializing the same model three times, and there is not enough RAM for that.
But I do not want to track and classify detected objects for the second and third sources.
Also, I don't want to use nvdsanalytics for the first source. In the method you suggested, as I understand it, all of these operations will actually be performed for every source; I just won't use their output. Am I right?
I think it would be better to move on from this topic. I just returned to this issue. It turns out that the Gst-nvstreamdemux element does what I wanted, so I changed the pipeline to this:
source0 and source3 are the line-crossing analysis cameras.
source1 and source4 are the overcrowding analysis cameras.
source2 and source 5 are the custom heatmap application cameras.
I added the extra sources (3, 4, 5) to make the role of the elements nvstreammux-lc, nvstreammux-oc, nvstreammux-hm, nvstreamdemux-lc, nvstreamdemux-oc and nvstreamdemux-hm clear.
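In case a diagram helps, this is my reading of the element names above (pad numbering and the `-lc`/`-oc`/`-hm` branch contents are illustrative; `lc` = line crossing, `oc` = overcrowding, `hm` = heatmap):

```
src0..src5 ─ nvstreammux ─ nvinfer (pgie) ─ nvstreamdemux ─┬─ src_0 ┐
                                                           ├─ src_3 ┴─ nvstreammux-lc ─ nvtracker ─ sgie ... ─ nvstreamdemux-lc
                                                           ├─ src_1 ┐
                                                           ├─ src_4 ┴─ nvstreammux-oc ─ nvdsanalytics ... ─ nvstreamdemux-oc
                                                           ├─ src_2 ┐
                                                           └─ src_5 ┴─ nvstreammux-hm ─ heatmap ... ─ nvstreamdemux-hm
```

So all six sources share one PGIE instance, then each pair of same-type sources is demuxed out, re-batched by its own nvstreammux, and processed only by the elements that application actually needs.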
Here are the GStreamer debug logs (level 4): error_logs.txt (862.1 KB)
If there is only one source (no matter which type), the system works well, but adding more sources causes this problem.
If you need more information to trace the problem, please ask.
UPDATE: A segmentation fault (core dumped) occurred twice when running with only one source, after processing about 40k frames.