The same ONNX file produces engine files with different sizes under the same configuration

The TensorRT version is 8.0.1.6 on Win10, with CUDA 11.1.1 and cuDNN v8.2. I generate the engine file as follows:
.\trtexec --onnx=lanenet_free_graphy.onnx --saveEngine=lanenet_trt8.0.1.6.engine --verbose

The first time, the size of lanenet_trt8.0.1.6.engine is 9,991 KB.
The second time, the size of lanenet_trt8.0.1.6.engine is 9,962 KB.
The third time, the size of lanenet_trt8.0.1.6.engine is 10,434 KB.

It seems that the engine file size fluctuates around 10 MB rather than being a fixed number. Why?
Thanks in advance.

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import onnx

filename = "yourONNXmodel"  # replace with the path to your ONNX file
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
Thanks!

I used your code to check whether the ONNX model is valid; the result is that it is valid.

The engine files are:
lanenet_trt8.0.1.6v1.engine (9.8 MB)
lanenet_trt8.0.1.6v2.engine (9.7 MB)
lanenet_trt8.0.1.6v3.engine (10.2 MB)
lanenet_trt8.0.1.6v4.engine (9.8 MB)
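To confirm that the builds really differ byte-for-byte and not just in reported size, a small script like the sketch below can compare the files. The file names are the ones listed above; adjust the paths to wherever your engines live:

import hashlib
from pathlib import Path

def summarize(path):
    """Return (size_in_bytes, short_sha256_digest) for a file."""
    data = Path(path).read_bytes()
    return len(data), hashlib.sha256(data).hexdigest()[:16]

# Engine files from the builds above (hypothetical local paths).
engine_files = [
    "lanenet_trt8.0.1.6v1.engine",
    "lanenet_trt8.0.1.6v2.engine",
    "lanenet_trt8.0.1.6v3.engine",
    "lanenet_trt8.0.1.6v4.engine",
]

for name in engine_files:
    if not Path(name).exists():
        print(f"{name}: not found")
        continue
    size, digest = summarize(name)
    print(f"{name}: {size} bytes, sha256 {digest}")

If both the sizes and the digests differ between runs, the builder is genuinely producing different engines each time, not just reporting sizes differently.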

Here are my ONNX model and verbose log.

lanenet_free_graphy.onnx (9.0 MB)

And the log:
lanenet_trt8.0.1.6.engine.log (2.3 MB)

Hi,

We couldn’t reproduce the same issue.
Could you please try the latest TensorRT version, 8.4?

https://developer.nvidia.com/nvidia-tensorrt-8x-download

Also, please confirm: are you using the same machine for every build?

Thank you.

OK, I will try TensorRT v8.4.