There is no way to install TensorFlow on the Jetson TX1

Hi,
I have trained an SSD-MobileNet-v1 model on a custom dataset, and I want to run inference on a Jetson TX1 with CUDA. To do that, I need to convert the SSD model (currently checkpoint files) to TensorRT for inference on the Jetson. (Training was done on Windows 10.)
Now I want to install TensorFlow on the Jetson TX1 for inference and for converting the model to TensorRT (after first converting it to .onnx format),
but I can't install TensorFlow on the Jetson TX1. I have tried multiple ways to install it, and none of them succeeded…
My environment is:

Board: Nvidia Jetson TX1
Ubuntu: 18.04
JetPack: 4.6.3
Python: 3.6.9
CUDA: 10.2
cuDNN: 8.2
TensorRT: 8.2

What I have tried:
I tried `pip install tensorflow`, but I get this error:

ERROR: No matching distribution found for tensorflow

I downloaded the TensorFlow .whl file from the NVIDIA repository (this link). The file name is:

tensorflow-2.7.0+nv22.1-cp36-cp36m-linux_aarch64.whl

But when I run `pip install ./tensorflow-2.7.0+nv22.1-cp36-cp36m-linux_aarch64.whl`,
I get this error:

ERROR: No matching distribution found for h5py

It seems there is no matching h5py distribution for Python 3.6; the required h5py version needs Python >= 3.7.
I tried installing Python 3.7, but every library installed by the SDK Manager is for Python 3.6, and I cannot install those libraries for Python 3.7…
Also, when I try to install TensorFlow with Python 3.7 (`sudo python3.7 -m pip install tensorflow`), I get the same error ("No matching distribution found…").

Why is it so painful to install this library?!

How can I install TensorFlow properly?

Hi,

Please stay on Python 3.6 for better compatibility, since it is the default version on Ubuntu 18.04.

You can find the detailed installation steps for JetPack 4.6.3 in the topic below:
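For reference, a minimal sketch of the usual sequence for TensorFlow on JetPack 4.6.x follows, based on NVIDIA's "Installing TensorFlow for Jetson Platform" guide. The package pins and the jp/v46 index path are assumptions that may need adjusting for JetPack 4.6.3, so check them against the linked topic:

```bash
# System packages needed to build h5py and numpy on the TX1
sudo apt-get update
sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip \
    libjpeg8-dev liblapack-dev libblas-dev gfortran

# Upgrade pip and install the Python prerequisites.
# h5py is pinned to a release that still supports Python 3.6, which is what the
# "No matching distribution found for h5py" error was complaining about.
sudo python3 -m pip install -U pip testresources setuptools
sudo python3 -m pip install -U numpy==1.19.4 future mock keras_preprocessing \
    keras_applications gast protobuf pybind11 cython pkgconfig packaging h5py==3.1.0

# Install the Jetson build of TensorFlow from NVIDIA's pip index
# (jp/v46 is the JetPack 4.6.x path; adjust it if the linked topic says otherwise) ...
sudo python3 -m pip install --extra-index-url \
    https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow

# ... or install the wheel that was already downloaded, now that h5py resolves:
# sudo python3 -m pip install ./tensorflow-2.7.0+nv22.1-cp36-cp36m-linux_aarch64.whl
```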

Thanks.

@AastaLLL
Thank you for your response. I will try that and let you know if it works. 🌸
@dusty_nv
Hey Dusty, how are you doing?
I downloaded the pre-trained SSD-MobileNet from this link and fine-tuned it on a custom dataset (face mask detection), so I now have multiple checkpoint files. When I run inference on Windows, I load the checkpoints directly and make predictions with the object_detection library.
Now I want to run the trained model on the Jetson TX1.
How can I do that?
I have run inference with the standard TensorFlow library (on the CPU) and it gave about 5 FPS, which is very slow.
I want to convert the model (the checkpoint files) to TensorRT and run inference with that…
Can you help me with how to do that?

@Hamzeh.nv the tool that I had used to convert the TensorFlow checkpoints to UFF was @AastaLLL’s project here:

However, it has been a few years since I did this, and I personally no longer work with TensorFlow/UFF models, having moved to ONNX and PyTorch, so YMMV. UFF support has been deprecated in TensorRT. There is TensorRT documentation about the UFF tools here: https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/uff/uff.html
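If you go the ONNX route instead, a rough sketch of the pipeline for a TF2 Object Detection API checkpoint looks like the following. The paths, config name, and opset are placeholders, and SSD's NMS post-processing frequently needs extra graph surgery or a TensorRT NMS plugin, so treat this as a starting point rather than a finished recipe:

```bash
# 1) On the training machine, from the TF Object Detection API repo
#    (research/object_detection): export the fine-tuned checkpoint to a SavedModel.
python exporter_main_v2.py \
    --input_type image_tensor \
    --pipeline_config_path pipeline.config \
    --trained_checkpoint_dir training/checkpoints \
    --output_directory exported_model

# 2) Convert the SavedModel to ONNX with tf2onnx (pip install tf2onnx).
python -m tf2onnx.convert \
    --saved-model exported_model/saved_model \
    --opset 11 \
    --output ssd_mobilenet.onnx

# 3) On the TX1, build a TensorRT engine from the ONNX file with trtexec,
#    which ships with JetPack.
/usr/src/tensorrt/bin/trtexec \
    --onnx=ssd_mobilenet.onnx \
    --saveEngine=ssd_mobilenet.engine \
    --fp16
```

Only the trtexec step has to run on the Jetson; the checkpoint export and ONNX conversion can stay on the Windows training machine.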

You might also find TF-TRT to be a convenient option for deploying TensorFlow models to TensorRT: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html

@dusty_nv
Thank you for your help, Dusty.
I hope I can convert the model.
If I have any other questions, I will ask them here.
Thanks again 🌸🌸🌸
