Migrating a DeepStream 5.0 compatible model to 6.1/6.0.1/6.0

Hardware Platform (Jetson / GPU): GPU
DeepStream Version: 6.0, 6.0.1, 6.1
JetPack Version (valid for Jetson only): N/A
TensorRT Version: as in the corresponding DeepStream container (8.2.5 in 6.1, 8.0.1 in 6.0)
CUDA Version: 11.4 in 6.1, 11.3 in 6.0.1 and 6.0
NVIDIA GPU Driver Version (valid for GPU only): 515.65.01

I previously opened this discussion on the DeepStream forum: Migrating a DeepStream 5.0 compatible model to 6.1/6.0.1/6.0

I ran the conversion inside the DeepStream 6.1, 6.0.1, and 6.0 containers respectively. I am not sure why the ONNX model failed to convert with trtexec under TensorRT 8.2.5, while it succeeded under TensorRT 8.0.1. You can find the last messages from trtexec (TensorRT 8.2.5) in the discussion linked above.
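For reference, this is roughly the kind of conversion command involved; the model file name is a placeholder, and `2>&1 | tee` simply captures the full log for comparison between the two containers:

```shell
# Run inside each DeepStream container (6.1 -> TensorRT 8.2.5, 6.0 -> 8.0.1).
# "model.onnx" is a placeholder for the actual model file.
trtexec --onnx=model.onnx \
        --saveEngine=model_b1.engine \
        --verbose \
        2>&1 | tee trtexec_build.log
```

Running the same command in both containers and diffing the logs usually pinpoints the first layer or tactic where the 8.2.5 parser diverges from 8.0.1.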

Meanwhile, the engine built with TensorRT 8.0.1 could be parsed in the DeepStream 6.1 container (TensorRT 8.2.5) for single-batch inference, but failed with dynamic batch inference.
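For dynamic batch inference, the engine needs an explicit optimization profile covering the batch range. A minimal sketch of such a build, assuming an input tensor named `input` with shape 3x544x960 and a batch range of 1 to 16 (all placeholders for the real model):

```shell
# Build an engine with an optimization profile so it can serve
# dynamic batch sizes at runtime. Tensor name "input", the
# 3x544x960 dimensions, and the 1..16 batch range are assumptions.
trtexec --onnx=model.onnx \
        --minShapes=input:1x3x544x960 \
        --optShapes=input:8x3x544x960 \
        --maxShapes=input:16x3x544x960 \
        --saveEngine=model_dynamic.engine
```

An engine built without such a profile (or with a max batch of 1) will only accept single-batch inference, which could explain the behavior above; the `batch-size` configured for Gst-nvinfer must not exceed the engine's maximum profile batch.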

Are there any breaking changes in TensorRT or the DeepStream engine parser?


Could you please share the ONNX model that reproduces the issue and the complete logs of the trtexec command run with the --verbose option, for better debugging?

Thank you.