I have a short question regarding
tf.experimental.tensorrt.Converter in TensorFlow 2.1:
if I perform the conversion without building the engines, i.e. only graph optimization and a precision change such as FP32 to FP16, is the resulting
.pb platform-independent?
Concrete scenario is the following:
- train model on workstation
- perform conversion without building the engines on workstation
- move the resulting .pb to the target platform
- load the model with
tf.saved_model.load and build the engines on the fly
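To make the workflow concrete, this is roughly the conversion step I have in mind (a sketch based on the TF 2.x tf.experimental.tensorrt API; the directory paths are placeholders, and running it requires a GPU with TensorRT installed):

```python
import tensorflow as tf

# On the workstation: graph optimization + FP16 precision, no engine building.
params = tf.experimental.tensorrt.ConversionParams(precision_mode='FP16')
converter = tf.experimental.tensorrt.Converter(
    input_saved_model_dir='saved_model_dir',  # placeholder path
    conversion_params=params)
converter.convert()
# Note: converter.build(...) is deliberately NOT called here.
converter.save('trt_saved_model_dir')  # placeholder path

# Later, on the target platform:
# model = tf.saved_model.load('trt_saved_model_dir')
# The TRT engines would then be built lazily at inference time.
```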
Or should the conversion be done on the target platform (even if I don’t build the engines)?
Thanks in advance!