Hello, I am currently using transfer learning to train a custom object-detection model (SSD-MobileNet) for DetectNet, following the jetson-inference procedure on a Jetson Orin Nano. Once training is finished and the model has been converted to ONNX and run through TensorRT, can I transfer that trained model to a Jetson Nano and use it there? Or are the two configured differently because of the architecture?
Also, would I be able to use the most recent weights to continue training the PyTorch model on similar but new data? Or would you recommend combining all of the previous data with the new data and starting over? I'm trying to get more data in the areas I need in order to make the model more robust.
@jcass358 you should be able to use the --resume=CHECKPOINT (or --pretrained-ssd) arguments to load one of your previous model checkpoints and re-train it on a new/different dataset. You can also specify multiple dataset directories should you want to train on all of them at once.
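As a rough sketch, resuming from a checkpoint with train_ssd.py might look like the following. The dataset paths, model directory, checkpoint filename, and epoch count are all illustrative placeholders, and the exact flags should be checked against your version of the jetson-inference training scripts:

```shell
# Resume training from a previous checkpoint on a new dataset.
# All paths and the checkpoint name below are hypothetical examples.
python3 train_ssd.py \
    --dataset-type=voc \
    --data=data/my_new_dataset \
    --model-dir=models/my_model \
    --resume=models/my_model/mb1-ssd-Epoch-99-Loss-2.5.pth \
    --epochs=100
```

Passing multiple dataset directories to --data (where supported) lets you train on all of your data at once instead of incrementally.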
It’s probably worth trying both ways (training on all your data “from scratch”, and incremental training on new data) to see which performs the best for you. You can also run with the --validate-mean-ap flag and it will compute the per-class accuracies after each epoch, which will allow you to keep a closer eye on how your model is performing and which approach turns out better.
Hey Dusty, one more question: does the model keep the best mean AP, or would I have to specify which weights had the highest mAP when converting to ONNX?
@jcass358 every model checkpoint is saved, but the onnx_export.py script automatically selects the model with the lowest loss (unless you manually point it to a specific checkpoint). So you could specify your checkpoint with the best mAP, or modify the scripts to do that for you.
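For instance, pointing onnx_export.py at a specific checkpoint (rather than letting it pick the lowest-loss one) might look like this. The checkpoint filename is a hypothetical example, and the flag names should be verified against the onnx_export.py in your copy of jetson-inference:

```shell
# Export a specific checkpoint (e.g. the one with the best mAP)
# instead of the automatically selected lowest-loss checkpoint.
# The checkpoint name here is an illustrative placeholder.
python3 onnx_export.py \
    --model-dir=models/my_model \
    --input=models/my_model/mb1-ssd-Epoch-87-Loss-2.1.pth \
    --output=models/my_model/ssd-mobilenet.onnx
```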