Failed to convert frozen graph to TensorRT engine

I encountered the error ImportError: No module named uff when I converted a frozen graph to a TensorRT engine with JetPack 3.3 without manually installing UFF.

I got a different error when I converted the frozen graph to a TensorRT engine with JetPack 3.3 after manually installing TensorRT 4 and UFF.
The error message is below.

However, I converted mobilenet_v1_0p25_128.plan successfully with JetPack 3.3 after manually installing TensorRT 3 and UFF.

What are the exact steps to use TensorFlow with TensorRT under JetPack 3.3 on the TX2?

All the steps I followed are from the links below.
https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification (works with JetPack 3.2)
https://devtalk.nvidia.com/default/topic/1038957/jetson-tx2/tensorflow-for-jetson-tx2-/1

The error message:
nvidia@tegra-ubuntu:~/tf_to_trt_image_classification$ python scripts/convert_plan.py data/frozen_graphs/mobilenet_v1_0p25_128.pb data/plans/mobilenet_v1_0p25_128.plan input 128 128 MobilenetV1/Logits/SpatialSqueeze 1 0 float
Traceback (most recent call last):
  File "scripts/convert_plan.py", line 71, in <module>
    data_type
  File "scripts/convert_plan.py", line 22, in frozenToPlan
    text=False,
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 41, in from_tensorflow
    https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensoRT Python API' link""".format(err))
ImportError: ERROR: Failed to import module (No module named graphsurgeon)
Please make sure you have graphsurgeon installed.
For installation instructions, see:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensoRT Python API' link

Thank you,

Hi,

ImportError: No module named uff

It looks like your UFF installation is not complete.

The steps should look like this:
1. Reflash and install all the packages from JetPack Installer:
https://developer.nvidia.com/embedded/downloads#?search=jetpack%203.3

2. Install our official TensorFlow:
https://devtalk.nvidia.com/default/topic/1038957/jetson-tx2/tensorflow-for-jetson-tx2-/post/5278617/#5278617

3. Download the UFF library from the website:
https://developer.nvidia.com/compute/machine-learning/tensorrt/3.0/ga/TensorRT-3.0.4.Ubuntu-16.04.3.x86_64.cuda-9.0.cudnn7.0-tar.gz

4. Extract the library

$ tar -xzf TensorRT-3.0.4.Ubuntu-16.04.3.x86_64.cuda-9.0.cudnn7.0-tar.gz

5. Install the uff package

  • Copy TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl to the TX2
$ sudo apt-get install python-pip
$ sudo pip install TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl
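After step 5, a quick sanity check can confirm that the converter's Python dependencies are importable before running a conversion. This is a minimal sketch; the module names are the ones the convert_plan.py traceback imports, and the try/except form works on the Python 2.7 interpreter shown in the logs:

```python
def missing_modules(names):
    """Return the module names that the interpreter cannot import."""
    missing = []
    for name in names:
        try:
            __import__(name)
        except ImportError:
            missing.append(name)
    return missing

# The uff converter imports these at run time; an empty list means
# convert_plan.py should get past its import stage.
print(missing_modules(["uff", "graphsurgeon", "tensorrt"]))
```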

Thanks.

Hi,

Thank you for your prompt support.

May I ask about the step 5 command again?

$ sudo pip install TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl

When I extracted TensorRT-4.0.1.6, I got the folder TensorRT-4.0.1.6, not TensorRT-3.0.4.

I followed the steps you described.

However, I got two different results at step 5.

  1. Converted to a TensorRT engine successfully when I installed the TensorRT-3.0.4 uff wheel.
    $ sudo pip install TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl

  2. Failed to convert to a TensorRT engine when I installed the TensorRT-4.0.1.6 uff wheel.
    $ sudo pip install TensorRT-4.0.1.6/uff/uff-0.2.0-py2.py3-none-any.whl

The error message:

nvidia@tegra-ubuntu:~/tf_to_trt_image_classification$ python scripts/convert_plan.py data/frozen_graphs/mobilenet_v1_0p25_128.pb data/plans/mobilenet_v1_0p25_128.plan input 128 128 MobilenetV1/Logits/SpatialSqueeze 1 0 float
Traceback (most recent call last):
  File "scripts/convert_plan.py", line 71, in <module>
    data_type
  File "scripts/convert_plan.py", line 22, in frozenToPlan
    text=False,
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 41, in from_tensorflow
    https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensoRT Python API' link""".format(err))
ImportError: ERROR: Failed to import module (No module named graphsurgeon)
Please make sure you have graphsurgeon installed.
For installation instructions, see:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensoRT Python API' link
nvidia@tegra-ubuntu:~/tf_to_trt_image_classification$

Thank you,

Hi,

Sorry about that.

It looks like there is a dependency issue with the TensorRT-4.0.1.6 uff package: it expects the graphsurgeon module, which your error shows is not installed.
Please use TensorRT-3.0.4 instead:

sudo pip uninstall uff
sudo pip install TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl
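After the reinstall, it can help to confirm which uff package the interpreter actually resolves, so a leftover 4.0.1.6 install is not shadowing the 3.0.4 wheel. A small hedged check, compatible with the Python 2.7 interpreter in the logs:

```python
def module_location(name):
    """Return the file a module is imported from, or None if it is missing."""
    try:
        mod = __import__(name)
    except ImportError:
        return None
    return getattr(mod, "__file__", "<built-in>")

# Should print a path under /usr/local/lib/python2.7/dist-packages/uff
# when the TensorRT-3.0.4 wheel is the active one.
print("uff:", module_location("uff"))
```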

Thanks.

Hi AastaLLL,

Thank you for your information.
We will use TensorRT-3.0.4.

Could this affect performance?

Thank you,

Hi,

It should be okay.
Although the parser is from TensorRT-3.0.4, you still use TensorRT 4.0 for inference.

Thanks.

Hi,

Great, it's good to know.

Thank you,

Hi All,

I am using the TensorRT-3.0.4 uff package along with the recommended CUDA (9.0), cuDNN (7.0), and TensorFlow (1.12.0) versions. However, I am facing an issue writing the UFF model using the "convert_to_uff.py" script at "/usr/local/lib/python2.7/dist-packages/uff/bin". I have tried every permutation and combination to dump the UFF model, but I have not been successful.

Could you please help me out and let me know how to save the UFF model converted with the above script?

Thanks

Hi,

We recommend upgrading your TensorRT to v5.0.
We have fixed some issues in the uff parser.
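If the CLI script is not cooperating, the same conversion can also be driven through the uff Python API, which writes the file itself when an output filename is given. This is a sketch based on the from_tensorflow_frozen_model call visible in the tracebacks above; the output_filename keyword is an assumption from the uff package docs, and the paths and output node are the ones from this thread, so adjust them for your model:

```python
def convert_frozen_to_uff(pb_path, output_nodes, uff_path):
    """Convert a frozen TensorFlow graph to a .uff file on disk.

    Returns False when the uff package is not importable, so the caller
    can install the wheel first instead of crashing.
    """
    try:
        import uff
    except ImportError:
        return False
    # output_filename asks the converter to serialize the model to disk.
    uff.from_tensorflow_frozen_model(pb_path, output_nodes,
                                     output_filename=uff_path)
    return True

ok = convert_frozen_to_uff(
    "data/frozen_graphs/mobilenet_v1_0p25_128.pb",
    ["MobilenetV1/Logits/SpatialSqueeze"],
    "mobilenet_v1_0p25_128.uff",
)
print("converted:", ok)
```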

Thanks.