• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version DS 6.0
• JetPack Version (valid for Jetson only) 4.6 (L4T 32.6.1)
• TensorRT Version 8.0.1
• Issue Type question
Hello everyone!
I am running an inference pipeline that resembles deepstream-test-3 from the NVIDIA-AI-IOT GitHub repo with DS 6.0, and I implemented logic to restart the pipeline every time the camera changes stream.
My question is:
Can I somehow make nvstreammux keep the output frame at the same height and width as the input?
Or can I take the dimensions of the stream from the first element of the pipeline (the source bin that connects to the nvstreammux), so I can set the nvstreammux width/height properties to them?
Since the next element (nvinfer) will scale the frames to the dimensions my model expects for inference (640x640) anyway, I don't want to perform another resize at the nvstreammux; I think I don't need it, and I lose information from the frame. What is your opinion on this?
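For the second idea, this is roughly what I had in mind: a sketch based on the cb_newpad callback from the Python deepstream-test-3, where I read the negotiated width/height from the decoder's src pad caps and copy them onto the muxer. The signature and passing the streammux as the callback's user data are my own assumptions, not the reference app as-is:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def cb_newpad(decodebin, decoder_src_pad, streammux):
    # Hypothetical variant of deepstream-test-3's cb_newpad: read the negotiated
    # width/height from the decoder's new src pad and copy them onto nvstreammux.
    caps = decoder_src_pad.get_current_caps()
    if caps is None:
        caps = decoder_src_pad.query_caps(None)
    structure = caps.get_structure(0)

    # Only video pads carry width/height fields
    if structure.get_name().startswith("video"):
        ok_w, width = structure.get_int("width")
        ok_h, height = structure.get_int("height")
        if ok_w and ok_h:
            # Make the muxer output resolution match the incoming stream,
            # so no extra scaling happens before nvinfer.
            streammux.set_property("width", width)
            streammux.set_property("height", height)

# Wired up when building the source bin, e.g.:
# uri_decode_bin.connect("pad-added", cb_newpad, streammux)
```

Since I restart the pipeline on every stream change anyway, I would set these properties before moving the pipeline back to PLAYING. Is this a reasonable approach, or is there a better way?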
Thanks a lot!