What is the best way to implement models trained with the TensorFlow Object Detection API on the Nano?

I installed TensorFlow on the Nano following the instructions on the NVIDIA website, then tried to install the Object Detection API the same way I do on Windows, but with no luck, and to be honest I don't know any other way to do it.

I wonder what would be the best way to use custom-trained models from the TF Object Detection API in a Python script on the Nano?


Could you share the error or issue you ran into when installing the TensorFlow Object Detection API?
With more information about the failure, we can look into it.

Another recommended approach is to convert the model to the ONNX format and deploy it with TensorRT.
This can save you memory and also gives better performance.
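The ONNX/TensorRT route above can be sketched as two command-line steps: export the SavedModel with `tf2onnx`, then build a TensorRT engine with `trtexec` (which ships with JetPack under `/usr/src/tensorrt/bin` on the Nano). A minimal sketch, assuming a hypothetical `./saved_model` export directory and opset 13; adjust paths and opset for your model:

```python
# Hedged sketch: assemble the tf2onnx and trtexec command lines for converting
# a TF Object Detection API SavedModel to ONNX and then to a TensorRT engine.
# The paths and opset number are assumptions -- adjust them to your setup.

saved_model_dir = "./saved_model"  # hypothetical SavedModel export directory
onnx_path = "model.onnx"

# Step 1: SavedModel -> ONNX via the tf2onnx converter.
tf2onnx_cmd = [
    "python", "-m", "tf2onnx.convert",
    "--saved-model", saved_model_dir,
    "--output", onnx_path,
    "--opset", "13",
]

# Step 2: ONNX -> TensorRT engine via trtexec (bundled with JetPack).
trtexec_cmd = [
    "/usr/src/tensorrt/bin/trtexec",
    f"--onnx={onnx_path}",
    "--saveEngine=model.trt",
    "--fp16",  # FP16 is usually a good fit for the Nano's GPU
]

print(" ".join(tf2onnx_cmd))
print(" ".join(trtexec_cmd))
```

Running these two commands on the Nano (or doing the tf2onnx step on a desktop and only the trtexec step on the device) avoids installing the full Object Detection API on the Nano at all.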


The issue was that while installing the API with setup.py, pip ran into dependency conflicts. Its way of resolving them was to download and install every available version of packages such as pyarrow and pandas, check whether each particular version worked, and then move on to the next one. For example, it downloaded and installed every version of pandas from 1.3.0 through 1.2.5, 1.2.4, 1.2.3, 1.2.2, and so on, down to about 0.7.0. At first I used a 32GB card, but the installation used all the space; then I bought a 64GB card and cancelled the install after 8 hours when it still wasn't done. I have never seen pip behave like this before.

Maybe I should let it continue?
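The endless backtracking described above can usually be short-circuited by pinning the packages pip keeps cycling through in a constraints file and passing it with `pip install -c`. A minimal sketch; the version numbers here are illustrative assumptions, not known-good pins for this setup:

```python
# Hedged sketch: write a pip constraints file that pins the packages pip was
# backtracking through. The versions below are assumptions -- substitute the
# versions that are known to work on your Nano.

pins = {
    "pandas": "1.1.5",   # hypothetical known-good version
    "pyarrow": "3.0.0",  # hypothetical known-good version
}

with open("constraints.txt", "w") as f:
    for name, version in pins.items():
        f.write(f"{name}=={version}\n")

# Then install the API with:  pip install . -c constraints.txt
print(open("constraints.txt").read())
```

With the constraints file in place, pip is forced to use exactly these versions instead of trying every release one by one.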

Anyway, I will also try the suggested ONNX/TensorRT method.
Thank you


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.