Hi
I have trained an SSD-MobileNet-v1 on a custom dataset, and I want to run inference on a Jetson TX1 with CUDA. I need to convert the SSD model (saved as checkpoints) to TensorRT for inference on the Jetson. (Training was performed on Windows 10.)
Now I want to install TensorFlow on the Jetson TX1 for inference and for converting to TensorRT (by first converting to the .onnx format),
but I can't install TensorFlow on the Jetson TX1. I've tried multiple ways to install it, but none of them succeeded…
My environment is:
But when I try `pip install ./tensorflow-2.7.0+nv22.1-cp36-cp36m-linux_aarch64.whl`
I get this error:
ERROR: No matching distribution found for h5py
It seems there is no matching h5py distribution for Python 3.6; recent h5py releases require Python >= 3.7.
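If the blocker really is h5py, one workaround (an assumption on my part; the exact version pin and package names may differ on your JetPack release) is to install the HDF5 system libraries and pin an older h5py release that still supports Python 3.6 before installing the TensorFlow wheel:

```shell
# Hypothetical workaround: install HDF5 headers so h5py can build from
# source, pin an h5py release that still supports Python 3.6 (newer
# releases require >= 3.7), then install NVIDIA's TensorFlow wheel.
sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev
pip3 install 'h5py<3.2'   # version bound is an assumption; adjust as needed
pip3 install ./tensorflow-2.7.0+nv22.1-cp36-cp36m-linux_aarch64.whl
```

If the h5py build still fails, installing `Cython` and `pkgconfig` with pip first sometimes helps, since h5py needs them to compile from source.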
I tried installing Python 3.7, but every library that was installed by the SDK Manager is for Python 3.6, and I cannot install those libraries for Python 3.7…
Also, when I try to install TensorFlow with Python 3.7 (`sudo python3.7 -m pip install tensorflow`), I get the same "no matching distribution" error.
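The "no matching distribution" error under Python 3.7 is expected: the `cp36-cp36m` part of the wheel filename means it was built only for CPython 3.6, so pip under 3.7 will refuse it, and as far as I know NVIDIA does not publish a 3.7 wheel for that release. A small sketch of how the tag can be read out of the filename (pure string logic, nothing Jetson-specific):

```python
# Parse the CPython interpreter tag out of a wheel filename to see
# which Python version it targets (wheel naming convention:
# name-version-pythontag-abitag-platform.whl).
def wheel_python_tag(wheel_name):
    parts = wheel_name[: -len(".whl")].split("-")
    return parts[-3]  # third field from the end is the interpreter tag

tag = wheel_python_tag("tensorflow-2.7.0+nv22.1-cp36-cp36m-linux_aarch64.whl")
print(tag)  # cp36 -> installable only under CPython 3.6
```

So the practical options are to stay on the Python 3.6 that JetPack ships and fix the h5py dependency there, or to find a wheel whose tag matches the interpreter you actually run.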
Why is it so painful to install this library?!
How can I install TensorFlow properly?
@AastaLLL
Thank you for your response; I'll try that and let you know if it works. 🌸 @dusty_nv
Hey dusty, how are you doing?
Dusty, I downloaded the pre-trained SSD-MobileNet from this link and fine-tuned it on a custom dataset (face mask detection), so now I have multiple checkpoint files. When I run inference on Windows, I load the checkpoints directly and make predictions with the object_detection library.
Now I want to run the trained model on the Jetson TX1.
How can I do that?
I have run inference with the normal TensorFlow library (on the CPU), and it gave 5 FPS, which is very slow.
I want to convert the model (checkpoint files) to TensorRT and run inference with that…
Can you help me with how to do that?
@Hamzeh.nv the tool I used to convert the TensorFlow checkpoints to UFF was @AastaLLL's project here:
However, it has been a few years since I did this, and I personally no longer work with TensorFlow/UFF models, having moved to ONNX and PyTorch, so YMMV. UFF support has been deprecated in TensorRT. There is TensorRT documentation about the UFF tools here: https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/uff/uff.html
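For the ONNX route, a rough sketch of the pipeline, assuming the model was trained with the TF2 Object Detection API (the paths, filenames, and config here are placeholders; tf2onnx and trtexec are real tools, but check their docs for your versions):

```shell
# Hypothetical pipeline: TF Object Detection checkpoint -> SavedModel
# -> ONNX -> TensorRT engine. All paths/names are placeholders.

# 1) Export the fine-tuned checkpoint to a SavedModel
#    (exporter_main_v2.py ships with the TF2 Object Detection API).
python3 exporter_main_v2.py \
    --input_type image_tensor \
    --pipeline_config_path pipeline.config \
    --trained_checkpoint_dir ./checkpoint \
    --output_directory ./exported_model

# 2) Convert the SavedModel to ONNX with tf2onnx.
python3 -m tf2onnx.convert \
    --saved-model ./exported_model/saved_model \
    --opset 13 \
    --output ssd_mobilenet.onnx

# 3) Build a TensorRT engine on the Jetson with trtexec.
/usr/src/tensorrt/bin/trtexec \
    --onnx=ssd_mobilenet.onnx \
    --saveEngine=ssd_mobilenet.engine \
    --fp16
```

Note that SSD post-processing (NMS) often doesn't convert cleanly, so you may need to graph-surgeon those nodes out or handle NMS outside the engine; that part is model-specific.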