DeepStream with a model that has dynamic outputs - best approach?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU (RTX 4090), Ubuntu 22.04
• DeepStream Version: 7.1
• TensorRT Version: 10.8
• NVIDIA GPU Driver Version (valid for GPU only): 570
• Issue Type (questions, new requirements, bugs): Question

Hello, I am attempting to use DeepStream with a model that has dynamic outputs (sizes [-1,4], [-1], and [-1]). I tried to start DeepStream but received errors that are apparently due to the -1 in the output shapes. I have been able to successfully host this model on a Triton server, but have not been able to find much documentation for using DeepStream with Triton. What would be the recommended method here? If it is DeepStream with Triton, would you mind providing links to documentation that I can use for setting up those config files?

Thank you.

Can you provide your ONNX model?

DeepStream supports Triton through the gst-nvinferserver plugin. Please refer to the Gst-nvinferserver — DeepStream documentation.
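As a rough starting point, a gst-nvinferserver config for a Triton-hosted detector might look like the sketch below. This is only an outline, not a verified config for your model: the model name, repo path, label file, class count, and custom parser entries are placeholders, and a model that outputs raw boxes/scores/classes will typically also need a custom bounding-box parser library.

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "fasterrcnn"        # placeholder: name of the model in your Triton repo
      version: -1
      model_repo {
        root: "./triton_model_repo"   # placeholder: the repo you already host on Triton
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize {
      scale_factor: 1.0
      channel_offsets: [0.0, 0.0, 0.0]
    }
  }
  postprocess {
    labelfile_path: "./labels.txt"    # placeholder
    detection {
      num_detected_classes: 80        # placeholder
      # raw boxes/scores/classes outputs usually require a custom parser function:
      custom_parse_bbox_func: "NvDsInferParseCustomMyModel"   # placeholder name
    }
  }
  custom_lib {
    path: "./libcustom_parser.so"     # placeholder path to the parser library
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```

The sample configs shipped with DeepStream under the gst-nvinferserver documentation follow this same structure and are the best reference for the exact fields your setup needs.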

Sent you the ONNX model through a direct message. I will review that documentation, thank you. Note that the ONNX output shape appears as 0 instead of -1, but after converting with trtexec, the resulting plan file reports it as -1.
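For reference, a rough way to cross-check the two (assuming the files are named model.onnx and model.plan, with the onnx and tensorrt Python packages installed):

```python
import onnx
import tensorrt as trt

# ONNX side: an output dimension with neither dim_value nor dim_param prints as 0.
model = onnx.load("model.onnx")
for out in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
    print("onnx output:", out.name, dims)

# TensorRT side: the built engine reports dynamic dimensions as -1.
logger = trt.Logger(trt.Logger.WARNING)
with open("model.plan", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    if engine.get_tensor_mode(name) == trt.TensorIOMode.OUTPUT:
        print("trt output:", name, engine.get_tensor_shape(name))
```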

If you can remove the “2621” and “2622” output layers, the model can be used with nvinfer.
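If dropping those outputs were acceptable, a rough sketch of the edit with the onnx Python package could look like this (file names are placeholders; output names are the ones mentioned above):

```python
import onnx

# Keep only the bbox output by dropping "2621" and "2622" from the graph outputs.
model = onnx.load("model.onnx")
keep = [o for o in model.graph.output if o.name not in ("2621", "2622")]
del model.graph.output[:]
model.graph.output.extend(keep)
onnx.save(model, "model_bbox_only.onnx")
```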

These correspond to the confidence scores and class numbers, so sadly I cannot remove them; the remaining output is the bbox coordinates. This is an object detection (FasterRCNN) model. Is there any other solution?

Only a dynamic first dimension is supported. Please modify your model.

The first dimension of both 2621 and 2622 is dynamic as well. Did you mean only the batch dimension can be dynamic? For 2621 and 2622, it is the number of detections, not the batch size, that controls their dynamic dimension.

Yes. Only the batch dimension can be dynamic.