DeepStream primary infer on selected sources

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: DS 5.1
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): 515
• Issue Type (questions, new requirements, bugs): questions
Can I select, in DeepStream’s config, the source IDs on which my primary infer model will run?
For example, I want source0 and source2 to go to my primary infer model, but source1 and source3 to go to my secondary infer model. Or do I need to run two separate DeepStream pipelines?

You need to run two separate pipelines. Batching streams that a model does not use wastes compute.
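A minimal sketch of the two-pipeline approach using the deepstream-app config format (file names, URIs, and values are hypothetical): each pipeline gets its own config file that batches only the sources its model should see.

```
# pipeline_a.txt — primary model runs on source0 and source2 only
[source0]
enable=1
type=3                      # 3 = URI source
uri=file:///path/to/source0.mp4

[source1]
enable=1
type=3
uri=file:///path/to/source2.mp4

[streammux]
batch-size=2                # batch only the two sources above

[primary-gie]
enable=1
config-file=primary_infer_config.txt
```

A second file, e.g. `pipeline_b.txt`, would list source1 and source3 and point `[primary-gie]` at the other model's nvinfer config; each file is then launched with its own `deepstream-app -c <config>` instance.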

Okay, thank you for the reply. I have one more situation: I have a single source with two ROIs I want to analyse, and I want to use a different model for each ROI. Is this possible with the DeepStream config, or should I use two pipelines for this situation as well?

There has been no update from you for a period, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

There are two ways:

  1. With the gst-nvinfer ROI parameters, “roi-top-offset” and “roi-bottom-offset” can define a horizontal band ROI in the video for inferencing. Gst-nvinfer — DeepStream 6.2 Release documentation
  2. With nvdspreprocess, you can define multiple rectangular ROIs for inferencing; in this case, the gst-nvinfer “input-tensor-from-meta” parameter should be enabled to skip nvinfer’s internal preprocessing. Gst-nvdspreprocess (Alpha) — DeepStream 6.2 Release documentation
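Hedged config sketches for the two methods above (all offsets, IDs, and ROI coordinates are hypothetical placeholders; check the linked docs for the exact keys in your DeepStream version).

Method 1, band ROI in the gst-nvinfer config file:

```
# gst-nvinfer config — restrict detections to a horizontal band
[class-attrs-all]
roi-top-offset=200        # ignore detections in the top 200 px
roi-bottom-offset=100     # ignore detections in the bottom 100 px
```

Method 2, rectangular ROIs in an nvdspreprocess config file:

```
# nvdspreprocess config — two rectangular ROIs on source 0
[property]
enable=1
target-unique-ids=1       # tensors routed to the nvinfer with gie-unique-id=1

[group-0]
src-ids=0
process-on-roi=1
# two ROIs as left;top;width;height quadruples (coordinates hypothetical)
roi-params-src-0=0;0;640;360;960;360;640;360
```

For the different-model-per-ROI case, my understanding is you would chain two nvdspreprocess + nvinfer pairs, each preprocess config listing one ROI and targeting its own model via `target-unique-ids`, with “input-tensor-from-meta” enabled on both nvinfer instances as the reply above describes.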

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.