Hello everyone,
I trained a network (Xception) with the Keras library and converted it to an ONNX model. Now I want to run it on the Jetson Nano with a live camera, so I used this command:
@yazan.doha it appears that the input layer of your model is called input and the output layer is called predictions. So you will want to run imagenet.py with --input_blob=input --output_blob=predictions instead.
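For example, something like this (the model and labels paths are placeholders for your own files, and csi://0 is just one possible camera URI):
$ ./imagenet.py --model=[your/model].onnx --labels=[your/labels].txt --input_blob=input --output_blob=predictions csi://0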
Hey @dusty_nv ,
I changed the input and output names and the camera now works, but I have two problems:
1- The camera runs slowly, not like it did before.
2- The accuracy values are strange, namely (01.40%, 01.30%, 01.26%, …)
What is wrong here?
Is there another way to use an ONNX model on the Jetson Nano with a live camera?
I would also like to ask again: does it matter that this network (Xception) is not available for installation in the Model Downloader?
It would appear that your Xception network takes longer to run than the other classification models used for realtime (like resnet-18, resnet-50, googlenet, etc.), and hence the application runs slower.
I would check that the pre-processing I linked to above matches what your model used during training (in Keras), namely that the mean-pixel and normalization coefficients are the same and that the data channel layout is the same.
The ONNX classification models that jetson-inference is configured to use are PyTorch models trained with train.py from the repo, and it uses those coefficients. I don't officially support using arbitrary models, so you may need to set it up differently.
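For reference, here is a rough sketch of the difference in coefficients I mean. The PyTorch numbers below are the standard ImageNet mean/std that torchvision (and therefore train.py) uses, and the Keras function assumes Xception's usual preprocess_input scaling to [-1, 1]; double-check both against what your training code actually did:

import numpy as np

# Keras xception.preprocess_input style: scale pixels to [-1, 1], keep NHWC layout
def preprocess_keras_xception(img):               # img: 299x299x3 uint8 RGB
    x = img.astype(np.float32) / 127.5 - 1.0
    return x[np.newaxis, ...]                     # shape 1x299x299x3

# torchvision-style ImageNet normalization used by the models train.py produces
def preprocess_pytorch_imagenet(img):             # img: 224x224x3 uint8 RGB
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (img.astype(np.float32) / 255.0 - mean) / std
    return x.transpose(2, 0, 1)[np.newaxis, ...]  # shape 1x3x224x224

If the application applies one of these while the model was trained with the other, you typically get exactly the kind of near-uniform, very low confidence values you are seeing.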
Hey @dusty_nv,
1- By arbitrary models, do you mean that it is better to train the network in PyTorch than in TensorFlow? I mean, if I use PyTorch with the same pre-trained network (Xception), do I not have to set up the coefficients before converting it to ONNX and deploying it on the Jetson Nano?
2- I'm not aware of how to do that; I have no idea how to set up the coefficients.
Maybe there are other ways to use ONNX, such as a Python script for live video that lets me control the video stream to fit my network (Xception). Do you have any idea or example of how to do this?
3- What about this command?
$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --explicitBatch --optShapes=[name]:[NxCxHxW] --verbose
model_kleber_neu.onnx (87.4 MB)
Hi @yazan.doha, I mean that the jetson-inference imageNet code is set up to expect a certain kind of ONNX model (namely, models trained with train.py as in my tutorial, which uses PyTorch). Hence the pre-processing that jetson-inference does uses the same coefficients that PyTorch uses during training. Any standalone Python script that runs your ONNX model will need to be set up with the same pre-processing that Keras used in order for the results to be the same.
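If you want to go the standalone-script route, a minimal sketch could look like the following. It assumes onnxruntime and OpenCV are installed, that the input and output tensors are named input and predictions (as they are in your model), that the converted model kept Keras' NHWC 299x299x3 layout, that training used Xception's usual [-1, 1] scaling, and that labels.txt holds one class name per line; adjust whatever does not match your setup:

import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model_kleber_neu.onnx")        # your ONNX file
labels = [line.strip() for line in open("labels.txt")]      # one class name per line

cap = cv2.VideoCapture(0)          # replace with your camera / GStreamer pipeline
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Keras-style pre-processing: BGR -> RGB, resize to 299x299, scale to [-1, 1]
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (299, 299)).astype(np.float32) / 127.5 - 1.0
    img = img[np.newaxis, ...]                               # NHWC: 1x299x299x3

    # run inference with the tensor names reported for your model
    scores = sess.run(["predictions"], {"input": img})[0][0]
    top = int(np.argmax(scores))
    name = labels[top] if top < len(labels) else str(top)
    print("{} {:.2f}%".format(name, 100.0 * float(scores[top])))

Note that onnxruntime will only use TensorRT or CUDA if it was built with those execution providers, so this is more a way to verify the pre-processing than a performance fix.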
Hey @dusty_nv ,
Yes, you are right, but when I train my pre-trained network directly on the Jetson Nano with train.py as you mentioned in your tutorial, I don't get an accuracy better than 60%, and I cannot transform the images so that the model can handle them better.
Should I use PyTorch with the specific pre-trained network on a separate PC, then convert it to ONNX and use it on the Jetson Nano?
Whether you run the training on a PC or Jetson, that doesn’t impact the accuracy of your trained model. To increase the accuracy during training, typically you would increase the number of training epochs and/or add images to your dataset.
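As for the PC-to-Jetson workflow: train with train.py on whichever machine, then export the checkpoint to ONNX (the repo includes onnx_export.py for this) and copy the .onnx file to the Nano. A minimal sketch of that export step, assuming a resnet18 backbone, the model_best.pth.tar checkpoint layout that train.py writes, and a 224x224 input (adjust these to your setup):

import torch
import torchvision.models as models

num_classes = 3                                   # number of classes in your dataset
model = models.resnet18(num_classes=num_classes)
checkpoint = torch.load("models/your_model/model_best.pth.tar", map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])   # load the retrained weights
model.eval()

dummy = torch.randn(1, 3, 224, 224)               # resolution the model was trained at
torch.onnx.export(model, dummy, "resnet18.onnx",
                  input_names=["input_0"], output_names=["output_0"])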
Hi @dusty_nv,
I trained my network on the Jetson Nano (2 GB) today with this command:
$ python3 train.py --model-dir=models/Yazan data/Yazan --batch-size=8 --workers=2 --epochs=30
for the dataset (Train: 1000 images, Val: 500 images, Test: 150 images).
The highest accuracy I had was 71.374 % at epoch 27, then it dropped to 56.493 % in the last epoch (29).
yaz.txt (120.4 KB)
By default train.py only decays the learning rate by 10x every 30 epochs (lr = args.lr * (0.1 ** (epoch // 30))), which never triggers in a 30-epoch run. So change that to lr = args.lr * (0.1 ** (epoch // 25)) instead.
I'm not sure, however, whether it will result in the accuracy continuing to increase or not.
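If your copy of train.py follows the standard PyTorch ImageNet example it is based on, the function with that change would look roughly like this:

def adjust_learning_rate(optimizer, epoch, args):
    # decay the learning rate by 10x every 25 epochs instead of every 30
    lr = args.lr * (0.1 ** (epoch // 25))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr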
Typically you could try using resnet-34 or resnet-50 instead of resnet-18, however I don’t know if Nano 2GB has enough memory for that or not.
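If the --arch flag works the same way as in the standard PyTorch ImageNet example that train.py is based on (check python3 train.py --help to be sure), switching the backbone would just be:
$ python3 train.py --model-dir=models/Yazan data/Yazan --arch=resnet34 --batch-size=8 --workers=2 --epochs=30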