Is TensorRT inference deterministic/reproducible?

So I was searching the net, but I still can’t find a clear answer to the question: is TensorRT inference deterministic?

I have seen that the developer guide has a section about determinism of the builder:

My question is about what happens afterwards:

  1. If I use the same engine to do inference on the same data, will I always get the same (bit-exact) results?
  2. Are there perhaps some layers that are deterministic and others that are not? If so, is there a list somewhere available?
  3. If I use the algorithm selector to make the builder deterministic, can I also reproduce the same inference results on two different GPUs?

Hi @daniel.widmann,
If you are using the same engine with the same input, TensorRT inference should be deterministic.
However, I don't think engine building is supposed to be deterministic, as tactics are chosen based on observed runtimes. If you output your log at info level, you should be able to compare the tactic selection between the two engines. Since different tactics/kernels can change the order of operations, you would expect floating-point differences.
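To see why a different order of operations alone changes the result, note that floating-point addition is not associative. A minimal pure-Python illustration (no TensorRT involved, just IEEE 754 arithmetic):

```python
import struct

def bits(x: float) -> int:
    # Raw 64-bit pattern of a Python float, for bit-exact comparison.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

a = (0.1 + 0.2) + 0.3  # one accumulation order
b = 0.1 + (0.2 + 0.3)  # a different order, same operands
print(a == b)                      # False
print(hex(bits(a)), hex(bits(b)))  # the bit patterns differ
```

The same effect, scaled up to large reductions inside convolution or GEMM kernels, is why two different tactics can produce outputs that are numerically close but not bit-identical.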
You can refer to the below link.


Thank you for your fast answer. It is very good to hear that TensorRT can be deterministic!

Regarding your example: it does indeed show a way to make the builder deterministic as well, which is very interesting. In the example, custom algorithm selectors are provided to cache the chosen tactics and read them back in the next build. So, to come back to my third question once more: could I use this approach to cache my chosen tactics, then build the network on a different GPU using the cached tactics, and finally get a network that behaves the same on both GPUs? Or is it unavoidable that two different types of GPUs will always show small deviations in the network output?