We have two RTSP cameras running simultaneously, and we want both to run the primary model (car detection), but only the first one should run the 2nd and 3rd models (LPD and LPR).
To achieve this, we have the following pipeline, which muxes both sources until they are demuxed after the primary model (car detection) and the tracker. After that, one branch goes to a queue, while the other goes through the 2nd and 3rd models (LPD and LPR). At the end of this process, both branches are muxed again.
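Roughly, this is the topology described above (my own sketch; element names follow the standard DeepStream plugins):

```
rtsp source 0 ─┐
               ├─ nvstreammux ─ nvinfer (car detection) ─ nvtracker ─ nvstreamdemux
rtsp source 1 ─┘
    demux src_0 ─ nvinfer (LPD) ─ nvinfer (LPR) ─┐
    demux src_1 ─ queue ─────────────────────────┴─ nvstreammux ─ (rest of pipeline)
```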
When I run the program with the following command, I get the errors below ( sudo GST_DEBUG=3 ./deepstream-nvdsanalytics-test -i rtsp://admin:Aventum@192.168.8.40:554/live/ch0 -i rtsp://192.168.8.120/av0_0 -p …/…/…/…/lib/libnvds_kafka_proto.so --conn-str="192.168.8.25;9070" --topic=entrada -s 1):
Hi,
You may set the following properties on the nvinfer plugin to decide whether inference is required:
unique-id : Unique ID for the element. Can be used to identify output of the element
flags: readable, writable, changeable only in NULL or READY state
Unsigned Integer. Range: 0 - 4294967295 Default: 15
infer-on-gie-id : Infer on metadata generated by GIE with this unique ID.
Set to -1 to infer on all metadata.
flags: readable, writable, changeable only in NULL or READY state
Integer. Range: -1 - 2147483647 Default: -1
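As an illustration (a sketch only: the names above are the GObject properties, while in the per-model nvinfer config files the corresponding keys are gie-unique-id and operate-on-gie-id; the file names and ID values here are hypothetical), the cascade could be wired like this:

```
# pgie_config.txt — primary car detector (hypothetical file name)
[property]
gie-unique-id=1

# sgie_lpd_config.txt — LPD, operates only on metadata from GIE 1
[property]
gie-unique-id=2
operate-on-gie-id=1

# sgie_lpr_config.txt — LPR, operates only on metadata from GIE 2 (LPD crops)
[property]
gie-unique-id=3
operate-on-gie-id=2
```

Note this selects *which upstream model's output* a secondary GIE consumes, not which input source it runs on.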
The problem is that we want to process through the 2nd and 3rd models (LPD and LPR) only the information that comes from the first RTSP source, and if I have not misunderstood, you are proposing to decide whether to infer based on the ID of the other inference elements.
Could we actually select where to do the inference based on the source_id?
Hi,
For reference, please share the following information:
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
Hi,
We have discussed this and confirm it is not supported in the current release. This use case would require multiple nvstreammux plugins, and current releases support only a single nvstreammux plugin. We are evaluating support for multiple muxers in a future release.