TF2-TRT Conversion Cross-Platform without engine

I have a short question regarding the tf.experimental.tensorrt.Converter in TensorFlow 2.1:

If I perform the conversion without building the engines, i.e. only the graph optimizations and a precision conversion such as FP32 to FP16, is the resulting .pb platform independent?

Concrete scenario is the following:

  • train model on workstation
  • perform conversion without building the engines on workstation
  • move resulting .pb to Jetson
  • load the model with tf.saved_model.load and build the engines on the fly

Or should the conversion be done on the target platform (even if I don’t build the engines)?
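For reference, the conversion step I have in mind looks roughly like this (a sketch using the public TF-TRT API; the SavedModel paths are placeholders, and calling converter.save() without converter.build() is what I mean by "without building the engines"):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# FP16 precision, engines NOT pre-built (no converter.build() call)
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='my_saved_model',   # placeholder path
    conversion_params=params)
converter.convert()

# Skipping converter.build(...) here: the TRT engines would then be
# built lazily on the first inference call on the target device.
converter.save('my_trt_saved_model')          # placeholder path
```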

Thanks in advance!

You need to execute the conversion on the machine on which you will run inference. This is because TensorRT optimizes the graph for the available GPUs, so a graph optimized on one machine may not perform well on a different GPU.
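On the target device, loading such a converted SavedModel and running the first inference is what triggers engine building. A minimal sketch (the model path and input shape are placeholder assumptions):

```python
import tensorflow as tf

# Load the TF-TRT converted SavedModel on the target (e.g. Jetson)
saved = tf.saved_model.load('my_trt_saved_model')   # placeholder path
infer = saved.signatures['serving_default']

# If engines were not pre-built, the first call builds them on this GPU,
# so expect the first inference to be noticeably slower than later ones.
dummy_input = tf.zeros([1, 224, 224, 3])            # assumed input shape
outputs = infer(dummy_input)
```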

Please refer to the best practices section of the TF-TRT user guide:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#best-practices

Thanks


Thanks for the clarification!