Mux two pipelines after Demuxing

We have two RTSP cameras running simultaneously, and we want both to run the primary model (car detection), but only the first one to run the 2nd and 3rd models (LPD and LPR).

To achieve this we have the following pipeline, which muxes both sources, runs the primary model (car detection) and the tracker, and then demuxes them. After that, one stream goes to a queue while the other goes through the 2nd and 3rd models (LPD and LPR). At the end of this process they are muxed again.

PIPELINE_GRAPH.pdf (40.5 KB)
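
In case it helps, here is roughly the same topology expressed as a gst_parse_launch() sketch. The RTSP addresses are ours, but the config file names (pgie_car.txt, sgie_lpd.txt, sgie_lpr.txt) are placeholders, the tracker library is just one of the stock DeepStream 5.1 trackers, and the analytics/OSD/sink part of the pipeline is omitted:

    /* Sketch of the intended topology; config file names are placeholders
     * and downstream analytics/OSD/sink elements are omitted. */
    #include <gst/gst.h>

    int main (int argc, char *argv[])
    {
      GError *err = NULL;

      gst_init (&argc, &argv);

      GstElement *pipeline = gst_parse_launch (
          /* Both cameras are batched, then run the primary car detector
           * and the tracker. */
          "rtspsrc location=rtsp://admin:Aventum@192.168.8.40:554/live/ch0 "
          "  ! rtph264depay ! h264parse ! nvv4l2decoder ! mux1.sink_0 "
          "rtspsrc location=rtsp://192.168.8.120/av0_0 "
          "  ! rtph264depay ! h264parse ! nvv4l2decoder ! mux1.sink_1 "
          "nvstreammux name=mux1 batch-size=2 width=1920 height=1080 live-source=1 "
          "  ! nvinfer config-file-path=pgie_car.txt "
          "  ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so "
          "  ! nvstreamdemux name=demux "
          /* Stream 0 also runs LPD and LPR... */
          "demux.src_0 ! queue ! nvinfer config-file-path=sgie_lpd.txt "
          "  ! nvinfer config-file-path=sgie_lpr.txt ! mux2.sink_0 "
          /* ...stream 1 bypasses them... */
          "demux.src_1 ! queue ! mux2.sink_1 "
          /* ...and both are batched again by a second muxer. */
          "nvstreammux name=mux2 batch-size=2 width=1920 height=1080 live-source=1 "
          "  ! fakesink",
          &err);
      if (!pipeline) {
        g_printerr ("Failed to build pipeline: %s\n", err->message);
        return -1;
      }

      GMainLoop *loop = g_main_loop_new (NULL, FALSE);
      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_main_loop_run (loop);
      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }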

When I run the program I get the errors below, using this command:

    sudo GST_DEBUG=3 ./deepstream-nvdsanalytics-test -i rtsp://admin:Aventum@192.168.8.40:554/live/ch0 -i rtsp://192.168.8.120/av0_0 -p …/…/…/…/lib/libnvds_kafka_proto.so --conn-str="192.168.8.25;9070" --topic=entrada -s 1

error_log (16.6 KB)

For further information, I'm attaching the .cpp file.

deepstream_nvdsanalytics_test_debut.cpp (57.9 KB)

Thank you in advance.

Hi,
You may set these properties of the nvinfer plugin to decide whether inference is required:

  unique-id           : Unique ID for the element. Can be used to identify output of the element
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 15
  infer-on-gie-id     : Infer on metadata generated by GIE with this unique ID.
                        Set to -1 to infer on all metadata.
                        flags: readable, writable, changeable only in NULL or READY state
                        Integer. Range: -1 - 2147483647 Default: -1
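
For example, in application code the two IDs could be wired like this (a minimal sketch; the function name, element variables, and config paths are illustrative):

    #include <gst/gst.h>

    /* Sketch: wire the GIE IDs so the second nvinfer only runs on
     * metadata produced by the first. */
    static void
    configure_gies (GstElement *pgie, GstElement *sgie)
    {
      /* The first engine tags its output metadata with unique-id 1. */
      g_object_set (G_OBJECT (pgie), "unique-id", 1,
                    "config-file-path", "pgie_config.txt", NULL);

      /* The second engine infers only on metadata carrying that ID;
       * -1 (the default) would mean "infer on all metadata". */
      g_object_set (G_OBJECT (sgie), "unique-id", 2,
                    "infer-on-gie-id", 1,
                    "config-file-path", "sgie_config.txt", NULL);
    }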

FYR, here is a gst-launch-1.0 command for running two primary inferences:
Adding a ghost pad after splitting a pipeline using Tee? - #11 by DaneLLL

Hi DaneLLL,

thanks for your reply.

The problem is that we want only the information coming from the first RTSP source to go through the 2nd and 3rd models (LPD and LPR), and if I didn't misunderstand, you are proposing to decide whether to infer based on the unique IDs of the other inference engines.

Could we actually select where to do the inference based on the source_id?
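
For reference, we can already read the source_id of each frame in a pad probe (a minimal sketch using the standard NvDsBatchMeta iteration), but as far as I can tell that only inspects metadata; it does not tell nvinfer to skip a source:

    #include <gst/gst.h>
    #include "gstnvdsmeta.h"

    /* Sketch: a pad probe that reads the source_id of every frame in
     * the batch. This inspects metadata only; it does not disable a GIE
     * for a given source. */
    static GstPadProbeReturn
    batch_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
      if (!batch_meta)
        return GST_PAD_PROBE_OK;

      for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
        g_print ("frame from source_id=%u\n", frame_meta->source_id);
      }
      return GST_PAD_PROBE_OK;
    }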

Thank you! :)

Hi,
For reference, please share the following information:
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

• Hardware Platform (Jetson / GPU)
Jetson Nano (NVIDIA Maxwell™ architecture, 128 CUDA® cores)
• DeepStream Version
5.1
• JetPack Version (valid for Jetson only)
4.5.1
• TensorRT Version
7.1.3-1+cuda10.2
• NVIDIA GPU Driver Version (valid for GPU only)
Cuda compilation tools, release 10.2, V10.2.89
450.51 (not sure)

Hi,
We have discussed this and confirmed it is not supported in the current release. This use case would require multiple nvstreammux plugins, and the current release supports only a single nvstreammux plugin. We are evaluating support for multiple muxers in a future release.