I read in previous posts that it is advised to generate the TRT engine on the same machine that uses it for inference. Moreover, all the TensorRT samples generate the engine before running inference.
More precisely, which specs must be identical between the machine that generates the engine file and the machine used for inference?
I think that
- TensorRT version
- CUDA version
- cuDNN version
must be identical.
But what about the driver version, for example?
Are there other specs that should be the same?
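In case it helps frame the question: one way I imagine checking this is to record the relevant versions on both machines and diff them. This is just a sketch; the spec names and example values below are hypothetical (in practice they would be queried, e.g. `tensorrt.__version__` for the TensorRT version):

```python
def find_mismatches(build_env, run_env):
    """Return the spec names whose values differ between the
    engine-building machine and the inference machine."""
    keys = set(build_env) | set(run_env)
    return {k for k in keys if build_env.get(k) != run_env.get(k)}

# Hypothetical example values for two machines; only the GPU differs here.
build = {"tensorrt": "7.0.0", "cuda": "10.2", "cudnn": "7.6.5", "gpu": "GTX 1080"}
run   = {"tensorrt": "7.0.0", "cuda": "10.2", "cudnn": "7.6.5", "gpu": "RTX 2080"}
print(find_mismatches(build, run))  # → {'gpu'}
```

My uncertainty is exactly about which keys belong in those dicts (driver version? GPU model/compute capability?).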
Thanks for your help :)