We are optimizing a TensorFlow 1.x frozen inference graph for TensorRT using the TF-TRT guide at https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html .
The optimized graph is built on a Jetson Nano. However, it will not run on a Jetson Xavier, even though both devices run JetPack 4.4.
How can we build a TF-TRT optimized graph that is compatible with all Jetson devices?
We would really appreciate your help, as this issue is making it difficult for us to recommend Jetson devices to our clients.