trtexec generates different engines on the same platform/machine with the same ONNX model

Hi,

Every time I generate an engine on my machine (with a single GPU) from the same ONNX model, I get a different engine.
The problem is that this results in different outputs from a computation-heavy mathematical algorithm I run using the engines.

Is there any consistent and deterministic way to generate identical engines given identical models?

cmd:
trtexec --onnx=/path/to/model.onnx --saveEngine=/path/to/engine_TRT-7-1-3_CCC-7-5.trt --fp16 --verbose --iterations=10

log file is attached.
engine_TRT-7-1-3_CCC-7-5_220313_082743.log (2.5 MB)

TensorRT Version: 7.1.3
GPU Type: GeForce GTX 1650
Nvidia Driver Version: 460.27.04
CUDA Version: 11.2
Operating System + Version: Ubuntu 18.04

Hi,

Could you please try the latest TensorRT version, 8.4:
https://developer.nvidia.com/nvidia-tensorrt-8x-download

If you still face this issue, we recommend that you share with us an ONNX model that reproduces it.

Thank you.

Unfortunately, my question is relevant only for versions 7.1.3 and 8.0.1 (the issue also occurs on 8.0.1); these are the versions available on the platforms we need to support.
Should I understand from your answer that there is a way to generate engines deterministically?

Anyway, one of the onnx models i use:
good-sim.onnx (42.7 MB)

Hi,

If you want absolute determinism, you can try the IAlgorithmSelector API, which lets you force the builder to choose the same tactic for each layer on every build. You may get satisfactory results much more easily by reusing the builder's timing cache.
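A minimal sketch of the AlgorithmSelector route in Python. Hedged: the `IAlgorithmSelector` class and its `select_algorithms`/`report_algorithms` methods follow the TensorRT Python API, but the selection policy shown here (always take the first candidate) is only an illustrative assumption; any fixed, reproducible policy works.

```python
# Deterministic selection policy, kept as a plain function so it is easy
# to test on its own. Given the number of candidate algorithms TensorRT
# offers for a layer, always pick index 0 (an arbitrary but fixed choice).
def pick_deterministic(num_choices):
    """Return a fixed choice list so every build makes the same decision."""
    return [0] if num_choices > 0 else []

# With the tensorrt package installed, this plugs in roughly as follows
# (sketch only; attach the selector to the builder config before building):
#
# import tensorrt as trt
#
# class DeterministicSelector(trt.IAlgorithmSelector):
#     def __init__(self):
#         trt.IAlgorithmSelector.__init__(self)
#
#     def select_algorithms(self, context, choices):
#         # 'choices' is the list of candidate algorithms for this layer;
#         # returning a fixed index removes the timing-based randomness.
#         return pick_deterministic(len(choices))
#
#     def report_algorithms(self, contexts, choices):
#         pass  # optionally log the final choices for auditing
#
# config.algorithm_selector = DeterministicSelector()
```

Note that always picking index 0 trades performance for reproducibility; a more careful policy would record the choices from one "good" build (via `report_algorithms`) and replay them on later builds.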

Thank you.