Different versions of TensorRT get different model inference results

Description

I run inference on the GroundingDINO model using the TensorRT C++ API.

For the same model and the same image, TensorRT 8.6 produces the correct detection boxes.

But after updating to TensorRT 10.4, I get no detection boxes at all.

The wrong results appear to be caused by TensorRT 10.4. How can I analyze this issue?

By the way, I have also tried several versions other than 8.6 (e.g., 9.3, 10.0, 10.1); none of them produce detection boxes.
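One way to narrow this down (a sketch, assuming Polygraphy and onnxruntime are installed in the environment for each TensorRT version under test) is to compare TensorRT's outputs against ONNX-Runtime on the same model, which separates "the engine computes different numbers" from "my C++ pre/post-processing differs":

```shell
# Install the comparison tooling (assumption: pip-based environment).
pip install polygraphy onnxruntime

# Run the same ONNX model through TensorRT and ONNX-Runtime and
# compare the outputs element-wise. Repeat this under each TensorRT
# version (8.6 vs 10.4) and see which one diverges from ONNX-Runtime.
polygraphy run grounddino.onnx --trt --onnxrt \
    --atol 1e-3 --rtol 1e-3
```

If the 10.4 run fails the comparison while 8.6 passes, that points at an engine-building/accuracy issue in the newer TensorRT rather than at the application code.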

Additional information is below.

I load the same ONNX model via the TensorRT C++ API and print information for each layer.

TensorRT 8.6 loads the model as 21,060 layers, while TensorRT 10.4 loads it as 37,921 layers. Why is the difference in the number of layers so large?

rt86_layers.txt (1.2 MB)
rt104_layers.txt (1.9 MB)
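For a like-for-like comparison of the layer breakdown across versions, the per-layer information can also be exported with trtexec (a sketch, assuming the trtexec binary from each matching TensorRT release is on PATH; the output filename is illustrative):

```shell
# Build an engine from the ONNX model and export detailed per-layer
# information to JSON. Run once per TensorRT installation and diff the
# two JSON files to see where the layer graphs diverge.
trtexec --onnx=grounddino.onnx \
        --profilingVerbosity=detailed \
        --dumpLayerInfo \
        --exportLayerInfo=layers_trt104.json
```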

Environment

TensorRT Version: 8.6.1.6 / 10.4.0.26
GPU Type: GeForce RTX 3090
Nvidia Driver Version: 535.183.06
CUDA Version: 12.2

Relevant Files

Model link : grounddino.onnx - Google Drive