Train custom model on Jetson Orin Nano using jetson-inference, deploy on Jetson Nano

Hello, I am currently learning and using transfer learning to train a custom object detection model (SSD-MobileNet) for DetectNet, following the jetson-inference procedure on a Jetson Orin Nano. Once training is finished, the model is converted to ONNX, and it is then run through TensorRT, can I send the trained model's files over to use it on a Jetson Nano? Or are they configured differently because of the architecture?

Thank you

Hi,

You can send the ONNX model to other platforms.
However, the TensorRT engine is optimized for the specific hardware it is built on, so it is not portable; you will need to rebuild the engine from the ONNX model on the Jetson Nano itself.
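For example, roughly following the standard jetson-inference SSD-MobileNet tutorial flow (the model directory name is illustrative):

    # on the Orin Nano: export the trained PyTorch checkpoint to ONNX
    python3 onnx_export.py --model-dir=models/mymodel

    # copy the ONNX model and labels.txt to the Jetson Nano, then run detectnet there;
    # TensorRT builds a fresh engine for the Nano's GPU on the first run (this can take a few minutes)
    detectnet --model=models/mymodel/ssd-mobilenet.onnx --labels=models/mymodel/labels.txt \
              --input-blob=input_0 --output-cvg=scores --output-bbox=boxes csi://0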

Thanks.


Thank you, AastaLLL.

Also, would I be able to use the most recent weights to continue training the PyTorch model on similar but new data? Or would you recommend combining all of the previous data with the new data and starting over? I'm trying to gather more data in the areas I need in order to make the model more robust.

Hi,

Training on Jetson works the same way as on a desktop GPU.
The answer depends on whether your training can retain what was learned from the previous data; fine-tuning only on new data risks the model forgetting what it learned before.

Thanks.

@jcass358 you should be able to use the --resume=CHECKPOINT (or --pretrained-ssd) arguments to load one of your previous model checkpoints and re-train it on a new/different dataset. You can also specify multiple dataset directories should you want to train on all of them at once.

It’s probably worth trying both ways (training on all your data “from scratch”, and incremental training on new data) to see which performs best for you. You can also run with the --validate-mean-ap flag, which computes the per-class accuracies after each epoch, letting you keep a closer eye on how your model is performing and which approach turns out better.
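As a rough sketch of what that could look like (the dataset path and checkpoint filename here are illustrative; confirm the exact flags with train_ssd.py --help):

    # resume from a previous checkpoint, continue training on the new data,
    # and report per-class mAP after every epoch
    python3 train_ssd.py --dataset-type=voc --data=data/new_dataset \
        --model-dir=models/mymodel \
        --resume=models/mymodel/mb1-ssd-Epoch-99-Loss-2.45.pth \
        --validate-mean-ap --epochs=30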


Thank you so much Dusty, this is perfect for my situation right now.

Hey Dusty, one more question: does the model keep the checkpoint with the best mean AP, or would I have to specify which weights had the highest mAP when converting to ONNX?

@jcass358 every model checkpoint is saved, but the onnx_export.py script automatically selects the model with the lowest loss (unless you manually point it to a specific checkpoint). So you could specify your checkpoint with the best mAP, or modify the scripts to do that for you.
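For example (the checkpoint filename is illustrative, and I'm assuming the option for pointing at a specific checkpoint is --input; check onnx_export.py --help to confirm):

    # export a specific checkpoint instead of the auto-selected lowest-loss one
    python3 onnx_export.py --model-dir=models/mymodel \
        --input=models/mymodel/mb1-ssd-Epoch-42-Loss-2.87.pth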

