The TensorRT version used is 220.127.116.11, on Windows 10 with CUDA 11.1.1 and cuDNN v8.2. I generate the engine file as follows:
.\trtexec --onnx=lanenet_free_graphy.onnx --saveEngine=lanenet_trt18.104.22.168.engine --verbose
the first time, the size of lanenet_trt22.214.171.124.engine is 9,991 KB
the second time, the size of lanenet_trt126.96.36.199.engine is 9,962 KB
the third time, the size of lanenet_trt188.8.131.52.engine is 10,434 KB
It seems that the size of the engine file fluctuates around 10 MB rather than being a fixed number. Why?
Thanks in advance.
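To confirm the repeated builds really produce different engines (and not just different reported sizes), the files can be compared byte-for-byte. Below is a minimal sketch; the filenames are placeholders for the engines you generated, and the script simply skips any file that is not present:

```python
# Sketch: compare sizes and SHA-256 hashes of engines from repeated builds.
# Different hashes for same-model builds indicate the builder is not
# producing byte-identical output across runs.
import hashlib
import os

def fingerprint(path):
    """Return (size_in_bytes, sha256_hex) for one engine file."""
    with open(path, "rb") as f:
        data = f.read()
    return len(data), hashlib.sha256(data).hexdigest()

# Placeholder filenames -- substitute the engines from your own builds.
engine_files = ["build1.engine", "build2.engine", "build3.engine"]
for path in engine_files:
    if os.path.exists(path):
        size, digest = fingerprint(path)
        print(f"{path}: {size} bytes, sha256={digest[:16]}...")
```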
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:
1) Validate your model with the snippet below:
import onnx
filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
I used your code to check whether the ONNX model is valid; the result is that it is valid.
The engine files are:
lanenet_trt184.108.40.206v1.engine (9.8 MB)
lanenet_trt220.127.116.11v2.engine (9.7 MB)
lanenet_trt18.104.22.168v3.engine (10.2 MB)
lanenet_trt22.214.171.124v4.engine (9.8 MB)
Here are my ONNX model and the verbose log:
lanenet_free_graphy.onnx (9.0 MB)
and the log
lanenet_trt126.96.36.199.engine.log (2.3 MB)
We couldn’t reproduce the same issue.
Could you please try the latest TensorRT version, 8.4?
Also, please confirm: are you using the same machine for every build?
OK, I will give TensorRT v8.4 a try.