Import uff doesn't work on the Jetson TX2

I want to convert a ResNet-18 model .pb file (trained on a custom dataset) to the .uff format. But "import uff" does not work, even after installing it with "sudo apt-get install uff-converter-tf". I am following the GitHub link below for object detection inference:

GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.

TF version used:

JetPack version used: 4.4

CUDA version: 10.2.89

How should I install uff and convert the TF model to UFF?
Also, how should we include custom models alongside the available default networks?

Hi,

The uff converter comes pre-installed via the SDK Manager, so you don't need to install it manually.

If you flash the device and install all the components, you should be able to import the uff module without issues:

$ python3
Python 3.6.9 (default, Jul 17 2020, 12:50:27) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import uff
2021-01-04 10:54:49.797184: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
>>> 

You can also check the installation with the following command:

$ dpkg -l | grep uff-converter-tf
ii  uff-converter-tf                              7.1.3-1+cuda10.2                                 arm64        UFF converter for TensorRT package
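
If you do want to generate a .uff file, the uff Python module can convert a frozen TensorFlow graph directly. Below is a minimal sketch; the file paths and node name (resnet18_frozen.pb, output_0) are placeholders that you need to replace with your model's actual values:

import uff

# Convert a frozen TensorFlow graph (.pb) to UFF.
# "resnet18_frozen.pb" and "output_0" are placeholders -- use your
# model's frozen graph path and output op name(s), without the ":0" suffix.
uff.from_tensorflow_frozen_model(
    frozen_file="resnet18_frozen.pb",
    output_nodes=["output_0"],
    output_filename="resnet18.uff",
)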

However, since UFF is deprecated, we recommend converting your model into ONNX format for TensorRT instead.
Our users usually use tf2onnx for this.
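
For example, a frozen .pb graph can be converted with the tf2onnx Python API roughly like this; the file paths and the input/output tensor names are placeholders for your model's actual values:

import tensorflow as tf
import tf2onnx

# Load the frozen TensorFlow graph (placeholder filename).
with tf.io.gfile.GFile("resnet18_frozen.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Convert to ONNX and write the result to disk.
# Tensor names like "input_0:0" are placeholders -- substitute the real
# input/output tensor names of your graph.
tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["input_0:0"],
    output_names=["output_0:0"],
    output_path="resnet18.onnx",
)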

Then you can run inference on a custom model by updating the model path and the corresponding parameters (e.g. number of classes, output layer names, etc.).
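
For example, with jetson-inference's Python bindings, loading a custom classification model looks roughly like this. The model/label paths and blob names are placeholders; input_0/output_0 follow the naming used in the repo's PyTorch-to-ONNX retraining examples:

import jetson.inference
import jetson.utils

# Load a custom ONNX classification model.
# All paths and layer names below are placeholders for your own model.
net = jetson.inference.imageNet("resnet18", [
    "--model=resnet18.onnx",     # placeholder: your converted ONNX model
    "--labels=labels.txt",       # placeholder: your class labels file
    "--input_blob=input_0",      # placeholder: input layer name
    "--output_blob=output_0",    # placeholder: output layer name
])

# Classify a test image and print the predicted class.
img = jetson.utils.loadImage("test.jpg")
class_id, confidence = net.Classify(img)
print(net.GetClassDesc(class_id), confidence)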

Thanks.
