Error using UFF to convert

I am trying to optimize some networks (MobileNet and Inception variants) using UFF on a Jetson Nano.
In particular, I'm using the GitHub repository. When running the script, the conversion from frozen graph to UFF produces warnings about an unsupported operation, FusedBatchNormV3. The script then tries to parse the UFF file to create the TensorRT engine, and the unsupported operation causes an error, so parsing fails.
I also tried implementing the same code using the Python API, and the problem is exactly the same.
However, when I create the UFF files on my PC with the Python API and then build the engine on the Jetson Nano, the problem seems to be solved, probably because the PC has the latest versions of TensorRT (5.1.5) and uff (0.6.3), while the versions included in JetPack 4.2 on the Jetson Nano are TensorRT 5.0 and uff 0.5.5.
Is this expected? I thought the code in that repository was supposed to work without problems on the Jetson Nano.
And is it possible to update TensorRT on the Jetson Nano without a new JetPack release?

Hi simoneluetto,

Thanks for sharing this information. Which version of TensorFlow are you using on your PC vs. Jetson?

My guess is that when the model is constructed, TensorFlow is selecting a different BatchNorm implementation, which is not recognized by the UFF converter.

The tf_to_trt_image_classification repository originally targeted TF 1.5, but perhaps there is a workaround to handle this version discrepancy.
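One thing you could try until the Nano's UFF converter catches up: TF 1.13 emits the V3 variant of fused batch norm, while the older converter only recognizes FusedBatchNorm. For inference the two ops appear to take the same inputs and attributes (I haven't verified this on every model), so rewriting the op name in the frozen GraphDef before running the converter may get the parser past it. A minimal sketch, where the file paths and output node name are placeholders for your model:

```python
def rename_unsupported_batchnorm(graph_def):
    """Downgrade FusedBatchNormV3 ops to FusedBatchNorm, in place.

    Works on any frozen tf.GraphDef; returns the same object for chaining.
    """
    for node in graph_def.node:
        if node.op == "FusedBatchNormV3":
            node.op = "FusedBatchNorm"
    return graph_def

# Usage sketch (paths and the output node name are placeholders):
#
#   import tensorflow as tf
#   import uff
#
#   graph_def = tf.GraphDef()
#   with open("frozen_graph.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
#
#   rename_unsupported_batchnorm(graph_def)
#   uff.from_tensorflow(graph_def, ["output_node"],
#                       output_filename="model.uff")
```

This only papers over the parser limitation; if the rename changes accuracy on your model, the safer route is retraining/re-freezing with an older TF or updating TensorRT via a newer JetPack.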


Hi jaybdub, thanks for the reply.
On both the PC and the Jetson Nano the TensorFlow version is 1.13.1, so I think the difference is caused by the TensorRT and uff versions.
What really interests me is understanding why the Jetson Nano can't run the repository I mentioned without errors, because it is one of the few repositories with conversion code, and it is supposed to run on the Nano.
I also followed the steps in the README file exactly. Is it normal that it doesn't work, or should I be checking for errors on my side?