Hi,
I am planning to fine-tune a pretrained TensorFlow model (EfficientNet) for object detection and use it on a Jetson Nano.
However, I am unsure whether this is possible. I see that there are several ways to convert between the different model formats.
But what is the best practice for doing this?
Thanks for any help and tips!
Hi,
You can run inference with it using TensorFlow on the Nano directly.
The package can be installed via the instructions below:
But if you want optimal performance on the Nano, it's recommended to convert the model into TensorRT.
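As a rough sketch of the usual install route, NVIDIA ships a prebuilt TensorFlow wheel for Jetson. Note the JetPack tag in the index URL (`v46` here) is an assumption and must be matched to your actual JetPack release:

```shell
# Hedged sketch: install NVIDIA's TensorFlow build for Jetson.
# The JetPack tag (v46) is an assumption -- match it to your JetPack release.
sudo apt-get update
sudo apt-get install -y libhdf5-serial-dev hdf5-tools python3-pip
pip3 install -U pip
pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow
```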
You can find an example in the below folder:
/usr/src/tensorrt/samples/python/end_to_end_tensorflow_mnist
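For a TensorFlow 2 SavedModel, one common conversion route (a sketch, not the only option; `saved_model_dir`, `model.onnx`, and `model.trt` are placeholder names) is to export to ONNX and then build a TensorRT engine with `trtexec`:

```shell
# Hedged sketch: SavedModel -> ONNX -> serialized TensorRT engine.
# saved_model_dir, model.onnx, and model.trt are placeholder names.
pip3 install tf2onnx
python3 -m tf2onnx.convert --saved-model saved_model_dir --output model.onnx
# Build the engine; --fp16 enables half precision on the Nano's GPU.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```

The serialized engine can then be loaded directly at runtime instead of rebuilding it on every start.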
Thanks.
Thank you for your reply. The problem is that I am using the TensorFlow Object Detection API, which, as far as I have experienced, is not possible to run on the Nano. Is that correct?
Hi,
You should be able to install the TensorFlow Object Detection API on Jetson.
Did you meet any errors when doing so?
Thanks.
It seems that I got it working, but the loading time is about 220 seconds. Do you think there is any way I can improve that?
Hi,
If TensorRT is an option for you, it's recommended to convert the model into TensorRT.
It is optimized for the Jetson environment.
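This can also help with the long startup time: a TensorRT engine can be built and serialized once, then deserialized at launch, which is typically much faster than loading a full TensorFlow graph. A minimal sketch, assuming a prebuilt engine at the placeholder path `model.trt`:

```python
# Hedged sketch: deserialize a prebuilt TensorRT engine at startup.
# "model.trt" is a placeholder for an engine built ahead of time (e.g. with trtexec).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path="model.trt"):
    # Reading and deserializing a cached engine avoids rebuilding it each run.
    with open(path, "rb") as f:
        runtime = trt.Runtime(TRT_LOGGER)
        return runtime.deserialize_cuda_engine(f.read())
```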
Thanks.