Gst-nvstreammux different resolution for heterogeneous streams

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Orin Devkit / RTX 3080
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.5
• Issue Type (questions, new requirements, bugs): questions

Is it possible to have different sizes (resolutions, i.e. width and height) for individual streams inside nvstreammux, or should I skip nvstreammux altogether for this use case and use separate sub-pipelines for the different-sized streams?

I think that is one of the more important changes in the “new” nvstreammux. See the documentation for heterogeneous batching.
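For context, in DeepStream 6.x the new nvstreammux is opt-in: it is enabled through an environment variable, and its batching behavior is driven by a config file rather than by width/height element properties. A minimal sketch (the specific keys and values below are illustrative, not a verified config — check the nvstreammux documentation for your release):

```
# Opt in to the new nvstreammux before starting the application
export USE_NEW_NVSTREAMMUX=yes

# mux_config.txt, passed to the mux via its config-file-path property
[property]
algorithm-type=1       # batching algorithm (round-robin)
batch-size=2
adaptive-batching=1    # let batch-size follow the number of connected sources
```

Unlike the legacy mux, the new nvstreammux has no width/height properties and does not scale frames itself, which is what makes heterogeneous batching possible.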

Oh wow, this sounds like a very plausible way forward; I will check that out. However, I am left wondering what side effects it will bring. Possibly some performance penalty for heterogeneous batching?

The legacy nvstreammux can accept heterogeneous streams (different resolutions), but it scales them all to one configured output resolution.
If you use the new nvstreammux, please refer to the link above: there you need to place an nvvideoconvert on each source branch to scale the different resolutions to a common one. Either way, scaling happens somewhere.
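A sketch of that per-branch scaling approach with the new mux enabled: each source gets its own nvvideoconvert plus a capsfilter before batching. The URIs, resolutions, and the nvinfer config path are placeholders, not a tested pipeline:

```
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 ! \
    nvinfer config-file-path=config_infer.txt ! fakesink \
  uridecodebin uri=rtsp://camera1/stream ! nvvideoconvert ! \
    'video/x-raw(memory:NVMM),width=1920,height=1080' ! mux.sink_0 \
  uridecodebin uri=rtsp://camera2/stream ! nvvideoconvert ! \
    'video/x-raw(memory:NVMM),width=1920,height=1080' ! mux.sink_1
```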

This isn’t quite what I am looking for; allow me to elaborate a little. Imagine there are 2 cameras:
camera1: 3840 x 2160
camera2: 1920 x 1080

I want to know if there is a way to preserve these resolutions throughout the pipeline. I understand that the nvinfer plugin requires a homogeneous size for a forward pass; my question is aimed at eliciting suggestions for achieving this in the best possible way. There is a reason camera1 is 4K: downscaling it to 1920x1080 would lose detail, whereas upscaling 1920x1080 would waste computational resources. My hunch was to have separate sub-pipelines like:

Source1 → nvinfer \
-----------------------------> nvtracker → tiler → nvosd
Source2 → nvinfer /
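One way to sketch that split is to give each branch its own nvstreammux and nvinfer, so every camera is inferred at (or near) its native resolution. Note that merging the two branches back into a single nvtracker is not shown — the tracker expects batched frames from one mux, and the parallel-inference sample mentioned in the reply below handles that merge with extra plumbing. All element settings here are placeholders:

```
# Branch 1: keep camera1 at 4K up to inference
uridecodebin uri=rtsp://camera1 ! mux1.sink_0
nvstreammux name=mux1 batch-size=1 ! \
  nvinfer config-file-path=infer_4k.txt ! ...

# Branch 2: camera2 stays at 1080p
uridecodebin uri=rtsp://camera2 ! mux2.sink_0
nvstreammux name=mux2 batch-size=1 ! \
  nvinfer config-file-path=infer_1080p.txt ! ...
```

Keep in mind that nvinfer itself scales each frame to the network's input dimensions internally, so preserving the source resolution mainly benefits downstream elements (OSD, tiler, recording) and lets a higher-resolution model be used on the 4K branch.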

The streammux output resolution is configurable; it should match the model’s input resolution.
From your pipeline graph: if both sources use the same model, the scaling is unavoidable. If they use different models, we recommend this sample: deepstream_parallel_inference_app, which supports running inference in parallel.


Thank you for the valuable pointers, Fanzh. I will check out the sample.

Thanks for the update! If you need further support, please open a new topic. Thanks!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.