Multiple input streams with multiple primary and secondary inference

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi, I have a question!
So let's consider that I have 13 input streams, and I want the first 5 streams to run the person inference config, the next 5 streams to run vehicle inference, and the last 3 streams to run animal inference…
So how do I create a pipeline or script where different streams work with different primary and secondary inference?

Hi,
You will need to create multiple pipelines.

So let's consider that I have 13 input streams, and I want the first 5 streams to run the person inference config, the next 5 streams to run vehicle inference, and the last 3 streams to run animal inference…

For each primary inference you want to run, you will need a separate pipeline. Batch the first 5 streams and use them as input for the person primary; in another pipeline, batch the next 5 streams and pass them to the vehicle primary, and so on. If you need to process all the streams together afterwards, you will need some metadata manipulation to merge the metadata and transfer it to another pipeline with a batch of all the streams.
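As a rough illustration of the "one pipeline per primary" approach, here is a minimal sketch in plain Python that splits the 13 stream URIs into three groups and builds a gst-launch-style description for each batched pipeline. The camera URIs and config file names (`pgie_person.txt`, etc.) are placeholders, not shipped files:

```python
# Sketch: split 13 stream URIs into three groups, each feeding its own
# batched DeepStream pipeline (person / vehicle / animal primaries).
# All URIs and config file names below are placeholders.

def build_pipeline(uris, pgie_config):
    """Return a gst-launch-style description batching `uris` into one nvinfer."""
    sources = " ".join(
        f"uridecodebin uri={uri} ! m.sink_{i}" for i, uri in enumerate(uris)
    )
    return (
        f"nvstreammux name=m batch-size={len(uris)} width=1920 height=1080 "
        f"! nvinfer config-file-path={pgie_config} ! fakesink "
        + sources
    )

streams = [f"rtsp://camera{i}/stream" for i in range(13)]
pipelines = [
    build_pipeline(streams[0:5],   "pgie_person.txt"),
    build_pipeline(streams[5:10],  "pgie_vehicle.txt"),
    build_pipeline(streams[10:13], "pgie_animal.txt"),
]
for p in pipelines:
    print(p)
```

Each description could then be launched with `gst-launch-1.0` or parsed with `gst_parse_launch`; in a real application you would of course also configure the muxer and sink properties for your setup.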

We have some example DeepStream pipelines: DeepStream pipelines - RidgeRun Developer Connection

I would need more information about your application to be able to give a more detailed answer.


@miguel.taylor, thank you for your reply!
@rishika.varshiinirao Kindly let us know if you still need support on this topic, thanks.

In OpenVINO, we can share the model instance ID across streams. So I want to know if, once the models are loaded on the GPU, I can use something like the model's instance ID in DeepStream to avoid loading the same model on the GPU multiple times.

I hope I am clear.

yes I do!

You can refer to the parallel multiple models sample: NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel. (github.com)

Here every parallel model is different…
But what I want is this: one camera runs "Vehicle and Licence plate", so the vehicle model is already loaded, and a second camera runs "Vehicle and vehicleType".

How do I use the vehicle model already loaded by the first camera for the second camera, instead of loading it again?

@rishika.varshiinirao You can pass all the camera streams through all the secondaries, so both cameras would run "Vehicle, Licence plate, and vehicleType". I assume you want to avoid this because of the unnecessary processing, but it is the easiest solution.

A useful feature for solving this kind of problem would be an extra nvinfer property specifying which frame index inside a batch to operate on, analogous to the existing operate-on-gie-id. But currently there is no such option in DeepStream. As far as I know, you would need to operate on all streams or have separate pipelines, each loading the models it uses.
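For context, the existing operate-on-gie-id property selects which upstream GIE's detections a secondary nvinfer processes; it does not filter by frame or stream index inside the batch. A secondary GIE config fragment typically looks like this (the IDs below are placeholders):

```
[property]
# This secondary only classifies objects produced by the GIE
# whose unique-id is 1 (the primary detector).
gie-unique-id=2
operate-on-gie-id=1
# There is no analogous property to restrict inference to a given
# frame/stream index within a batch.
```

The hypothetical feature described above would be a similar per-instance property, but keyed on the batch's frame index rather than the GIE ID.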

There is a third option: using nvinferserver and Triton to serve the models and consume them from multiple pipelines, effectively loading each model only once. We tried this on DeepStream 5.0, but each nvinferserver element still used its own model instance. I don't know if this has changed in more recent DeepStream versions.
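To make the Triton option concrete, a hedged sketch of an nvinferserver configuration is below: each pipeline's nvinferserver element points at the same Triton model repository, so the model weights live in one place. The model name and repository path are placeholders, and the exact protobuf fields may differ between DeepStream versions, so check the nvinferserver documentation for your release:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  backend {
    triton {
      model_name: "vehicle_detector"   # placeholder model name
      version: -1                      # latest version in the repo
      model_repo {
        root: "/opt/models"            # shared model repository (placeholder path)
        log_level: 2
      }
    }
  }
}
```

Whether two nvinferserver elements referencing the same model actually share one instance depends on the Triton instance-group settings and the DeepStream version, per the caveat above.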

Is there any Python implementation for the same?

No.

Any other help that I can get for my reference?

To build the pipeline in python?

Yes, you can port the C/C++ app to Python with the Python bindings. NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com)

I am supposed to clone the parallel repo inside the Python apps and then run the binding steps, right?

No. You need to rewrite the app with python bindings.

Is there any detailed explanation of tee and queue, and how to use them in the pipeline with proper streamdemux and streammux?

They are both provided by the GStreamer community. Please refer to the GStreamer documentation and source code:
queue (gstreamer.freedesktop.org)
tee (gstreamer.freedesktop.org)
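As a minimal illustration of how tee and queue fit together (a hedged sketch; a queue after each tee branch prevents the branches from blocking each other), one stream can be split into two branches like this:

```
gst-launch-1.0 videotestsrc ! tee name=t \
  t. ! queue ! autovideosink \
  t. ! queue ! fakesink
```

In a DeepStream context, the same pattern applies after nvstreamdemux: each demuxed stream gets its own queue before its downstream branch, and nvstreammux re-batches streams where needed.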