The uff converter comes pre-installed by the SDK Manager, so you don't need to set it up manually.
Once you have flashed the board and installed all the components, you should be able to import the uff module without issues:
Python 3.6.9 (default, Jul 17 2020, 12:50:27)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import uff
2021-01-04 10:54:49.797184: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
You can also check the installation with the following command:
$ dpkg -l | grep uff-converter-tf
ii uff-converter-tf 7.1.3-1+cuda10.2 arm64 UFF converter for TensorRT package
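If you do want to try the UFF path, the package also ships a `convert-to-uff` command-line tool. Below is a minimal sketch; `model.pb` and the output node name `logits` are placeholders for your own frozen TensorFlow graph:

```shell
# Hedged sketch: convert a frozen TensorFlow graph (.pb) to UFF.
# "model.pb" and "logits" are placeholders for your model's file and
# output tensor name.
convert-to-uff model.pb -o model.uff -O logits
```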
However, since UFF is deprecated, we recommend converting your model into ONNX format for TensorRT instead.
Most of our users use tf2onnx for this.
You can then run inference on a custom model by updating the model path and the corresponding parameters (e.g. number of classes, output layer names, etc.).
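The ONNX workflow above can be sketched as follows. The SavedModel directory, output file names, and opset are placeholder assumptions; `trtexec` ships with TensorRT (on Jetson it is typically under `/usr/src/tensorrt/bin/`):

```shell
# Hedged sketch: export a TensorFlow SavedModel to ONNX with tf2onnx,
# then build a TensorRT engine from the ONNX file.
# "./saved_model", "model.onnx", and "model.engine" are placeholders.
python3 -m tf2onnx.convert \
    --saved-model ./saved_model \
    --opset 13 \
    --output model.onnx

# Build (and benchmark) a TensorRT engine from the ONNX model
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine
```

Once the engine is built, point your inference script at `model.engine` and update the model-specific parameters mentioned above.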