Using an ONNX Model on Jetson Nano

Hello everyone,
I trained a network (Xception) with the Keras library and converted it to an ONNX model. Now I want to run it on the Jetson Nano with a live camera, and I used this command:

$ imagenet.py --model=models/Yazan/model_kleber_neu.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/Yazan/labels.txt csi://0

but I have a problem here:


Does it matter that this network (Xception) is not available to install in the Model Downloader?
Can you help me, please?

Hi,

Do you want to use a custom model with jetson-inference?
If yes, please make sure the information below is provided correctly:

Thanks.

@yazan.doha it appears that the input layer of your model is called input and the output layer is called predictions. So you will want to run imagenet.py with --input_blob=input --output_blob=predictions instead.
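With those names substituted, the full command would presumably look like this (same model and label paths as in your original post):

$ imagenet.py --model=models/Yazan/model_kleber_neu.onnx --input_blob=input --output_blob=predictions --labels=data/Yazan/labels.txt csi://0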

Also, you should check that the pre-processing coefficients for mean-pixel subtraction and normalization match what Keras used during training; jetson-inference applies its defaults here: https://github.com/dusty-nv/jetson-inference/blob/9ee9a950a80fbc7597d7e78a7ba0a282e85fae78/c/imageNet.cpp#L320
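For reference, the two pre-processing conventions look roughly like this (a sketch; the Keras 'tf'-mode scaling shown for Xception and the ImageNet mean/std used by the PyTorch-style code are worth double-checking against your training script):

import numpy as np

# Keras Xception ('tf' mode): scale pixels from [0, 255] to [-1, 1]
def keras_xception_preprocess(x):
    return x / 127.5 - 1.0

# PyTorch/ImageNet-style (what the linked jetson-inference code applies to ONNX models):
# scale to [0, 1], then subtract the per-channel mean and divide by the std
def pytorch_imagenet_preprocess(x):
    x = x / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    return (x - mean) / std

If your model was trained with the first convention but the runtime applies the second, the confidence values will be skewed.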

Hey @dusty_nv ,
I changed the input and output names, so the camera now works, but I have two problems:
1- The camera runs slowly, not like before.
2- The accuracy values are strange, namely (01.40%, 01.30%, 01.26%, …)
What is wrong here?

Is there another way to run an ONNX model on the Jetson Nano with a live camera?

I would also like to ask again: does it matter that this network (Xception) is not available for installation in the Model Downloader?

It would appear that the computational runtime of your Xception network is longer than that of the other classification models used for real-time inference (like resnet-18, resnet-50, googlenet, etc.), and hence the application runs slower.

I would check that the pre-processing I linked to above matches what your model used during training (in Keras), namely that the mean-pixel and normalization coefficients are the same and that the data channel layout is the same.

The ONNX classification models that jetson-inference is configured to use are PyTorch models trained with the train.py from the repo, and it uses those coefficients. I don’t officially support using arbitrary models, so you may need to set things up differently.

Hey @dusty_nv,
1- By arbitrary models, do you mean that it is better to train a network in PyTorch than in TensorFlow? I mean, if I use PyTorch with the same pre-trained network (Xception), do I not have to set up the coefficients before converting it to ONNX form to deploy it on the Jetson Nano?

2- I’m not aware of that; I mean, I have no idea how to set up the coefficients.
Maybe there are other ways to use ONNX, such as a Python script for live video that lets me control the video stream to fit my network (Xception). Do you have any idea or example of how to do this?
3- What about this command:
$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --explicitBatch --optShapes=[name]:[NxCxHxW] --verbose ??
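For example, filled in for my model (assuming the input layer is named input and Xception's standard 299x299x3 input, which I have not verified):

$ /usr/src/tensorrt/bin/trtexec --onnx=model_kleber_neu.onnx --explicitBatch --optShapes=input:1x299x299x3 --verbose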
model_kleber_neu.onnx (87.4 MB)

Hi @yazan.doha, I mean that the jetson-inference imageNet code is set up to expect a certain kind of ONNX model (namely, those trained with train.py as in my tutorial, which uses PyTorch). Hence the pre-processing that jetson-inference does uses the same coefficients that PyTorch used during training the model. Any standalone Python script that runs your ONNX model will need to be set up with the same pre-processing as Keras in order for the results to be the same.
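A minimal sketch of what such a standalone script could look like, assuming onnxruntime and OpenCV are available on the Nano and that your export kept Keras's NHWC layout with a 299x299 input and the input/predictions blob names from above (all of which you should verify against your model):

import cv2
import numpy as np
import onnxruntime as ort

# Load the exported model and the class labels
session = ort.InferenceSession("model_kleber_neu.onnx")
labels = [line.strip() for line in open("data/Yazan/labels.txt")]

cap = cv2.VideoCapture(0)  # or a GStreamer pipeline string for the CSI camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the Keras training pre-processing: RGB, 299x299, scaled to [-1, 1]
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (299, 299)).astype(np.float32)
    img = img / 127.5 - 1.0
    batch = np.expand_dims(img, axis=0)  # NHWC batch: 1x299x299x3
    scores = session.run(["predictions"], {"input": batch})[0][0]
    idx = int(np.argmax(scores))
    print(labels[idx], float(scores[idx]))
cap.release()

Because you control the pre-processing yourself here, the results should match what Keras produced during training.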

Hey @dusty_nv ,
Yes, you are right, but when I train my pre-trained network directly on the Jetson Nano with train.py, as you mentioned in your tutorial, I don’t get an accuracy better than 60%, and I cannot transform the images so that the model can handle them better.
Should I use PyTorch with the specific pre-trained network (Xception) on a separate PC, then convert it to ONNX and use it on the Jetson Nano?

Whether you run the training on a PC or a Jetson doesn’t impact the accuracy of your trained model. To increase the accuracy during training, you would typically increase the number of training epochs and/or add images to your dataset.
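For example, re-running the same train.py with a larger epoch count (the paths here are placeholders for your own model directory and dataset):

$ python3 train.py --model-dir=models/<your-model> data/<your-dataset> --epochs=100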

Hey @dusty_nv ,
I want to know whether you trained your model for 100 epochs, as mentioned here: Box
on the Jetson Nano, as shown here: Jetson AI Fundamentals - S3E3 - Training Image Classification Models - YouTube, or on an external platform?

Hi @yazan.doha, I trained it on a Jetson for 30 epochs, if I recall correctly (in the video I only show 1 epoch to keep the video shorter).

Hi @dusty_nv,
I trained my network on the Jetson Nano (2 GB) today with this command:
$ python3 train.py --model-dir=models/Yazan data/Yazan --batch-size=8 --workers=2 --epochs=30
for the dataset (train: 1000 images, val: 500 images, test: 150 images).

The highest accuracy I got was 71.374% at epoch 27; it then dropped to 56.493% in the last epoch (epoch 29).
yaz.txt (120.4 KB)

Do you have a suggestion to improve accuracy?

Since the accuracy dropped a lot after epoch 27, you might want to try decaying the learning rate at epoch 25 instead of at epoch 30 (which is what the code currently does here): https://github.com/dusty-nv/pytorch-classification/blob/dd2548357acb46e376543a3475942075f4a5ce88/train.py#L495

So change that to lr = args.lr * (0.1 ** (epoch // 25)) instead.
I’m not sure, however, whether that will result in the accuracy continuing to increase.
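In context, the change would look roughly like this (the function below follows the standard PyTorch ImageNet example that the repo's train.py is based on; check the linked line for the exact surrounding code):

def adjust_learning_rate(optimizer, epoch, args):
    # Decay the learning rate by 10x every 25 epochs instead of every 30
    lr = args.lr * (0.1 ** (epoch // 25))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr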

Typically you could try using resnet-34 or resnet-50 instead of resnet-18; however, I don’t know whether the Nano 2GB has enough memory for that.
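If you want to try that, train.py takes the architecture as an argument in the same way as the upstream PyTorch ImageNet example it is based on (worth verifying against the repo), e.g.:

$ python3 train.py --model-dir=models/Yazan data/Yazan --batch-size=8 --workers=2 --epochs=30 --arch=resnet34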
