Deploy a pretrained custom model on Jetson Nano

Hello! I’m a beginner with the Jetson Nano. I have already trained a custom object detection model on Windows (using TensorFlow 1.15.0) on my own PC. I’d like to deploy this model so I can use it on the Jetson Nano. How can I do this?
Thanks.

Hi,

You can install the TensorFlow package with the following instructions.
Then deploy it the same way you did in your desktop environment.
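
For reference, on JetPack 4.x the usual way is to install NVIDIA’s prebuilt TensorFlow wheel; a sketch, where the v44 in the index URL is an assumption matching JetPack 4.4 and must be changed to match your JetPack version:

$ sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev
$ sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'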

Thanks.

Hi, thank you! I deployed the model as I did in the desktop environment, and it works!
The problem is that it takes 8 minutes to run the detection code on a single image on the Jetson Nano!
What can I do to reduce the execution time?

Hi,

In general, an inference pipeline takes some initial time to prepare the model and set up the environment.

Have you tried running several images in the same session?
You should get much better performance on the subsequent inferences; a minimal sketch is below.
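
Assuming a TF 1.x frozen graph (the file and tensor names here are placeholders for your own model):

import tensorflow as tf

# Load the frozen graph once.
with tf.io.gfile.GFile('frozen_graph.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Keep one session alive for all images: the first run pays the
# setup cost (memory allocation, kernel selection); later runs are fast.
sess = tf.compat.v1.Session(graph=graph)
image_tensor = graph.get_tensor_by_name('image_tensor:0')
boxes = graph.get_tensor_by_name('detection_boxes:0')

for image in images:  # images: your list of HxWx3 uint8 arrays
    out = sess.run(boxes, feed_dict={image_tensor: image[None, ...]})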

Thanks.

Hello!
Thank you for your response!
Actually yes; when I try it on a video, it works fine until frame number 400 and then it gets stuck!
Now I’m using the Jetson Xavier NX and I have the same problem: after 822 frames it gets stuck. I am trying to get better performance by using CUDA. The installation looks fine, but I don’t see any improvement.
Let me summarize what I did: I deployed a TensorFlow object detection model, trained on Windows with custom data, on my Jetson Xavier NX. I ran the code and everything looks OK until frame number 822; then it becomes very slow and eventually gets stuck.
How can I make my TensorFlow model run faster using CUDA or anything else on the Jetson Xavier?
I am using: Python 3.9.6, JetPack 4.4.1, TensorFlow 1.15.2, OpenCV 4.5.5, NumPy 1.19.4, SciPy 1.5.4, CUDA 10.2, cuDNN libcudnn.so.8.0.0

Hi,
I’m also a beginner, and I have a similar task.
But from what I’ve seen, object detection on the Jetson Nano runs much better with TensorRT than with TensorFlow.

Thank you!
I’ll try TensorRT, but I also need CUDA and anything else that may make the model run faster.
I need to reach the highest performance possible with my model.

Let me know if you succeed. I was able to run TensorRT only with models that were already included in the package.

I am unsure how to use my own model for object detection. I understand that you need a .uff or .onnx file and then build an engine to run it on the Jetson Nano.
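
For the engine step, the trtexec tool that ships with JetPack can apparently build an engine from an .onnx file; a sketch, assuming the converted file is named model.onnx:

$ /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16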

I also have a custom model saved as a .pb file (frozen graph), and I managed to convert it to a .uff file, but I got stuck there.

Apparently .uff is an old format, and using .onnx is recommended instead. But I haven’t been able to find any step-by-step tutorial on how to do this (object detection from a .pb file on the Jetson Nano).
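
One possible route for the .pb → .onnx step seems to be the tf2onnx tool; a rough, untested sketch, where the input/output tensor names are assumptions based on the standard TF Object Detection API export:

$ pip3 install tf2onnx
$ python3 -m tf2onnx.convert --graphdef frozen_graph.pb \
      --inputs image_tensor:0 \
      --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 \
      --output model.onnx --opset 11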

Hi,

Would you mind testing your model with another video to see if it also gets slow at frame #822?
If yes, it’s recommended to check the memory utilization with tegrastats to see if any leak occurs.
If not, this might be a scene-dependent issue.
It’s recommended to check whether a detection in that frame might be causing the issue.
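
For example, run tegrastats in a second terminal while the video loop is going and watch whether the RAM figure keeps growing frame after frame (the --interval value is in milliseconds):

$ sudo tegrastats --interval 1000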

Also, in case you have not tried it yet: you can maximize the device performance with the following commands first.

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

Thanks.

Hi,

We have some examples of converting a TensorFlow-based model to TensorRT.
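
For reference, a rough sketch of the TF-TRT path available in TensorFlow 1.15, where frozen_graph and the output node names are placeholders taken from the standard object detection export:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Replace TensorRT-compatible subgraphs of the frozen GraphDef with
# TRT engines; unsupported ops keep running in plain TensorFlow.
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,  # your tf.compat.v1.GraphDef
    nodes_blacklist=['detection_boxes', 'detection_scores',
                     'detection_classes', 'num_detections'],
    precision_mode='FP16',
    max_workspace_size_bytes=1 << 28)
trt_graph = converter.convert()    # returns an optimized GraphDef
# Import trt_graph with tf.import_graph_def and run it in a session
# exactly like the original frozen graph.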

Thanks.
