How to use multiple models for different streams

**• Hardware Platform (Jetson / GPU):** NVIDIA GeForce RTX 3090
**• DeepStream Version:** DeepStream 7.1
**• TensorRT Version:** 10.3
**• NVIDIA GPU Driver Version (valid for GPU only):** 560.35.03
I have set up a DeepStream pipeline for multi-stream object detection using deepstream_test_3.py, but all the streams use the same model for inference. How can I use a different model for each stream? I should be able to decide which model each stream uses.

Do you mean you want to dynamically change which stream is inferenced by which model?

If you want different streams to be inferenced by different models, you should use multiple inferencing pipelines:

stream 0 → model 0 → output 0
stream 1 → model 1 → output 1
stream 2 → model 2 → output 2

No, I want to use a single pipeline with multiple PGIEs. Based on the stream ID of the data from nvstreammux, I should be able to decide which PGIE each stream is sent to for inference.

What do you mean by this? Should the different streams be inferred by different models? Will the stream assignment change dynamically?

Initially the streams will enter the pipeline together, but just before inference they should be split by stream ID and each sent to the PGIE assigned to that stream.

Yes, different streams will be inferred by different models.
No, the streams will not be changed dynamically.

nvstreammux is used to combine streams into a batch so the model can infer them more efficiently. Since your requirement is that different streams be inferred by different models, the streams are independent; you should use multiple pipelines in one process rather than one pipeline.
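The "multiple pipelines in one process" suggestion can be sketched roughly as follows. This is a minimal illustration only: the URIs, nvinfer config file names, and resolution values are placeholders, not values from this thread. Each stream gets its own nvstreammux (batch-size=1) and its own nvinfer, so every model only ever sees frames from its assigned stream.

```python
# Hypothetical mapping: each stream URI is paired with the nvinfer
# config file of the model that should run on it (placeholder names).
STREAM_TO_MODEL = {
    "rtsp://cam0/stream": "config_infer_model0.txt",
    "rtsp://cam1/stream": "config_infer_model1.txt",
}

def build_pipeline_desc(uri, pgie_config):
    # One independent pipeline description per stream, in gst-launch
    # syntax. "mux.sink_0" links the decoded stream into that
    # pipeline's own nvstreammux request pad.
    return (
        f"uridecodebin uri={uri} ! mux.sink_0 "
        f"nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
        f"nvinfer config-file-path={pgie_config} ! "
        f"nvvideoconvert ! nvdsosd ! fakesink"
    )

descs = [build_pipeline_desc(uri, cfg) for uri, cfg in STREAM_TO_MODEL.items()]

# In a real app you would do something like:
#   pipelines = [Gst.parse_launch(d) for d in descs]
#   for p in pipelines: p.set_state(Gst.State.PLAYING)
# and run a single GLib.MainLoop for all pipelines in the process.
```

Because each pipeline is independent, adding or removing a stream/model pair only means building one more description; no stream-ID filtering is needed anywhere.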

Isn't it possible to separate the batched tensor data based on the stream ID?

What tensor data do you mean? In separate pipelines the tensor data is already independent; no extra effort is needed.

I mean, if I am using a single pipeline, isn't it possible to filter streams inside the pipeline based on stream ID and send them to a particular PGIE?

Filter from what?

What tensor data are you talking about? Different models need different tensor inputs; how can you mix them together?

The batched stream data of multiple streams coming from nvstreammux. Isn't it possible to filter this batched data based on stream ID and send it to a particular PGIE?

There is no tensor data in the batched data.

gst-nvinfer only supports batched data; there is no point in filtering individual streams out of the batch.

If you don't want to use batching, nvinfer does not support that.


It's possible to use multiple PGIEs with this approach.

For your case, it will not help.

No, this is exactly what I was asking about.

You can use the sample; it is open source.