I would like to run an Xception_71 model from the DeepLab Model Zoo, trained on the Cityscapes dataset, on my PX2 (AutoChauffeur, Ubuntu 16.04, AArch64) using TensorRT. To do that, I obviously need to convert its frozen inference graph into a UFF file; as far as I understand (please correct me if I am wrong), I do not have to do that on the PX2 itself, but can use a different machine, since UFF files are not platform- or GPU-specific. The downloaded model consists of three files:
My question is whether I just need to run the UFF converter on frozen_inference_graph.pb as described in the SDK documentation, i.e., run
to get my UFF file, or whether I am missing something essential here.
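For concreteness, this is roughly the invocation I have in mind (a sketch assuming the `convert-to-uff` wrapper script that ships with the TensorRT UFF package; the output filename and the output node name are my own guesses, not something I have verified against this particular graph):

```shell
# List the graph's nodes first, to confirm the actual output node name
# (for DeepLab exports this is often "SemanticPredictions", but check):
convert-to-uff frozen_inference_graph.pb -l

# Then convert the frozen TensorFlow graph to UFF, naming the output node
# explicitly; -o sets the destination file:
convert-to-uff frozen_inference_graph.pb \
    -o frozen_inference_graph.uff \
    -O SemanticPredictions
```

If `convert-to-uff` is not on the PATH, the same converter can usually be reached as `python -m uff.bin.convert_to_uff`.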
Many thanks in advance!