TRT engine across different driver versions


I read in previous posts that it is advised to generate the TRT engine on the same machine that uses it for inference. Moreover, all TensorRT samples generate the engine before inference.

More precisely, what are the specs that should be identical between the machine that generates the engine file and the machine used for inference?

I think that

  • TensorRT version
  • CUDA version
  • cuDNN version

must be identical.

But what about the driver version, for example?
Are there any other specs that should be the same?

Thanks for your help :)

Hi @dbrazey,
The generated plan files are not portable across platforms or TensorRT versions. Plans are specific to the exact GPU model they were built on (in addition to the platform and the TensorRT version) and must be re-targeted to the specific GPU if you want to run them on a different GPU.
Please refer to the link below for the same.
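In practice, one way to guard against such mismatches is to record the build-time environment next to the serialized plan and compare it before deserializing on another machine. The sketch below is purely illustrative: the field names and the policy (exact match for TensorRT/CUDA/cuDNN/GPU, equal-or-newer driver allowed) are assumptions for this example, not part of any official TensorRT API.

```python
# Hypothetical compatibility check between the environment an engine was
# built in and the environment trying to load it. The version strings here
# are example values, not a statement about which versions are supported.

# These components are assumed to require an exact match.
EXACT_MATCH_KEYS = ("tensorrt", "cuda", "cudnn", "gpu")

def driver_tuple(version):
    """Parse '455.23.05' into (455, 23, 5) for numeric comparison."""
    return tuple(int(part) for part in version.split("."))

def is_compatible(build_env, runtime_env):
    """Exact match on TensorRT/CUDA/cuDNN/GPU; driver may be equal or newer."""
    for key in EXACT_MATCH_KEYS:
        if build_env[key] != runtime_env[key]:
            return False
    return driver_tuple(runtime_env["driver"]) >= driver_tuple(build_env["driver"])

# Example environments (values are made up for illustration).
build = {"tensorrt": "7.2.1", "cuda": "11.1", "cudnn": "8.0",
         "gpu": "Tesla T4", "driver": "455.23.05"}
newer_driver = dict(build, driver="460.32.03")
other_gpu = dict(build, gpu="Tesla V100")

print(is_compatible(build, newer_driver))  # True: only the driver is newer
print(is_compatible(build, other_gpu))     # False: plans are GPU-specific
```

A check like this fails fast at load time instead of surfacing as an obscure deserialization error later.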


Thanks for your answer.

I generated an engine on a machine.
I used the engine on a second machine, identical except for the driver version (a higher one).
I got no problem, but I wanted to be sure on that point.

Could you please confirm that the driver version must be exactly the same on the two platforms?
Or can a higher driver version be fine?
Do you have any information on that point?


Hi @dbrazey,
Ideally it is suggested to use the exact same GPU and software stack, but if you are using a higher driver version and facing no issues, that should be okay.