Hardware Platform: x86 with L4
DeepStream Version: 7.1
TensorRT Version: 10.8
How to reproduce the issue?: Build a dynamic engine from an ONNX model with an input profile, e.g. min 1x3x640x640, opt 5x3x640x640, max 8x3x640x640, then run the model in a DeepStream pipeline.
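For reference, a trtexec command along these lines reproduces the build (the tensor name `input` and the file names are placeholders for my actual model):

```
trtexec --onnx=model.onnx \
        --minShapes=input:1x3x640x640 \
        --optShapes=input:5x3x640x640 \
        --maxShapes=input:8x3x640x640 \
        --saveEngine=model.engine
```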
I have a dynamic-batch-size engine which seems to work fine with trtexec, but in the DeepStream pipeline it produces no outputs.
So the input shape is set to dynamic as intended? I did specify min/opt/max, as you can see in the previous post, and there is no option to force the outputs to be dynamic. I was assuming they are resolved to match the input dimensions at runtime?
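For what it's worth, my understanding is that in TensorRT the output dims of a dynamic engine stay at -1 until an input shape is bound on the execution context, roughly like this (tensor names and the engine path are placeholders for my model's actual ones):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("model.engine", "rb") as f:  # placeholder path
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
print(engine.get_tensor_shape("output"))       # dynamic dims still report as -1 here
context.set_input_shape("input", (4, 3, 640, 640))
print(context.get_tensor_shape("output"))      # now concrete for batch 4
```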
Anything?
I am happy to share the model (the ONNX file and the converted engine). This is a fairly standard export from .pth to ONNX with dynamic axes, so I am lost as to why this doesn't translate to DeepStream.
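For context, the export is essentially this (the module, tensor names and opset below are stand-ins for my actual setup):

```python
import torch

model = torch.nn.Conv2d(3, 16, 3)  # stand-in for the real module loaded from the .pth checkpoint
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,
)
```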
I think the outputs did not get generated for other reasons. The ONNX model is fine; I checked it with Netron and trtexec. It is just confusing that the reporting for dynamic models in DeepStream comes back with this conflicting, nonsensical information.
This masked the other issue, so it would be great if the output dimensions could be reported properly at runtime.
When loading this engine into DeepStream, it displays the dimensions correctly (as shown earlier), but the custom box-conversion function now seems to get called with the same frame all the time.
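For reference, the relevant part of my nvinfer config looks roughly like this (file names and the parser symbol are placeholders):

```
[property]
onnx-file=model.onnx
model-engine-file=model.engine
batch-size=8
network-mode=0
parse-bbox-func-name=NvDsInferParseCustomModel
custom-lib-path=/path/to/libnvdsinfer_custom_parser.so
```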
Just converting the ONNX model with trtexec without specifying a profile yields an engine that seems to work, but then the batched input is gone.
Any ideas how I can use the Python engine-building script (which I also want to use for calibration and quantization) to produce a working engine?
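The script is essentially this sketch (paths and the input tensor name are placeholders, and the INT8/calibration part is stubbed out as a comment):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TRT 10
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# same min/opt/max profile as above; "input" is the placeholder tensor name
profile.set_shape("input", (1, 3, 640, 640), (5, 3, 640, 640), (8, 3, 640, 640))
config.add_optimization_profile(profile)
# INT8 calibration would hook in here, e.g. config.set_flag(trt.BuilderFlag.INT8)

engine_bytes = builder.build_serialized_network(network, config)
assert engine_bytes is not None, "engine build failed"
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```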
3. I used the TRT-10.3 Docker image to export ONNX, but it didn't work with DeepStream. I don't know why; it may be an issue in the documentation. I don't know much about TRT. You can try to get more help at