sampleUffSSD on jetsonTX2

I want to use sampleUffSSD from TensorRT 4.0 on a Jetson TX2.
According to README.txt, I need to install the UFF converter. The installation method is described at the link below.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/index.html

I tried to install the Python API with the command below, but it failed.
pip2 install tensorrt-4.0.1.3_16.04-cp27-cp27mu-linux_x86_64.whl

The error message is as follows.
Requirement 'tensorrt-4.0.1.3_16.04-cp27-cp27mu-linux_x86_64.whl' looks like a filename, but the file does not exist
tensorrt-4.0.1.3_16.04-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.
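For reference, the wheel name itself explains the rejection: its platform tag is linux_x86_64, while the TX2 is a different architecture. A quick check, assuming nothing beyond a standard shell:

uname -m
# prints aarch64 on the TX2, but the wheel is tagged cp27-cp27mu-linux_x86_64,
# so pip refuses to install it on this platform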

I found a topic that says the Python API is not available for the Jetson platform:
https://devtalk.nvidia.com/default/topic/1036899/jetson-tx2/tensorrt-python-on-tx2-/

Can the UFF converter be used on the Jetson TX2?

Hi,

The TensorRT Python API is not available on Jetson, but the UFF converter works.
You can find the installation steps of the converter here:
[url]https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification[/url]

Thanks.

Thanks for the reply.

First, I installed the UFF converter with the following procedure:
1. sudo pip install tensorflow-1.5.0rc0-cp27-cp27mu-linux_aarch64.whl
2. sudo pip install TensorRT-4.0.1.6/uff/uff-0.4.0-py2.py3-none-any.whl
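As a sanity check, both packages import at this point. A minimal sketch, assuming a standard Python 2.7 setup (the __version__ check on tensorflow is standard; for uff a bare import is enough):

python -c "import tensorflow as tf; print(tf.__version__)"   # should report the 1.5.0rc0 build
python -c "import uff"                                       # no error means the wheel installed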

Next, I downloaded the pre-trained TensorFlow model (ssd_inception_v2_coco) from: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
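For reference, fetching and unpacking the checkpoint looks roughly like this (the dated archive name below is an assumption; use whichever ssd_inception_v2_coco archive the model zoo page currently lists):

wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
tar -xzf ssd_inception_v2_coco_2017_11_17.tar.gz
# frozen_inference_graph.pb is inside the extracted directory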

Finally, I ran the following command to convert the model to UFF, but it failed:
convert-to-uff tensorflow --input-file frozen_inference_graph.pb -O NMS -p config.py

The error message:
File "/usr/local/bin/convert-to-uff", line 11, in <module>
  sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 105, in main
  output_filename=args.output
File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
  return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 41, in from_tensorflow
ImportError: ERROR: Failed to import module (No module named graphsurgeon)
Please make sure you have graphsurgeon installed.
For installation instructions, see:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensorRT Python API' link

To solve this problem, I think it is necessary to install graphsurgeon.
However, I do not know how to do it.

Please show me how to solve this problem.

Hi,

graphsurgeon is included in our TensorRT Python API and is only available for x86 users.
Please convert your model on a desktop GPU, then run inference on Jetson with the converted model.
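The flow then looks roughly like this (host name and paths below are placeholders, not fixed by the sample):

# On an x86 host with the TensorRT Python API (uff + graphsurgeon) installed:
convert-to-uff tensorflow --input-file frozen_inference_graph.pb -O NMS -p config.py
# Copy the generated UFF file to the TX2 (user/host/path are examples only):
scp frozen_inference_graph.uff nvidia@tx2:/usr/src/tensorrt/data/ssd/
# On the TX2, build and run sampleUffSSD against the copied file as usual.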

Thanks.

This is really unfortunate, as the three required wheels are all Python (major) version and architecture agnostic. You can script the extraction and installation of the required wheels, as sketched below.
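A minimal sketch of such a script for the uff and graphsurgeon wheels this thread needs, assuming the usual TensorRT 4.x tarball layout (the archive and wheel paths vary by release, so treat them as placeholders):

# Pull just the architecture-agnostic wheels out of the x86 TensorRT archive:
tar -xzf TensorRT-4.x.tar.gz --wildcards 'TensorRT-*/uff/*.whl' 'TensorRT-*/graphsurgeon/*.whl'
# Their py2.py3-none-any tags mean pip accepts them on aarch64 as well:
sudo pip install TensorRT-*/uff/*.whl TensorRT-*/graphsurgeon/*.whl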

I had to do so when containerizing the [url]https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification[/url] repo. You can use a multi-stage Dockerfile to extract the wheels and then install them; see [url]https://github.com/idavis/JetsonContainers/blob/master/docker/Examples/tf_to_trt_image_classification/Dockerfile[/url] for an example.

You must pass a URL arg, which I've left out, to source the archive, as NVIDIA won't let you download it publicly. I've uploaded the archive to my own blob storage account, which I then download it from.