The SSD-Mobilenet and SSD-Inception models are currently trained with TensorFlow, which is why they do not yet appear in the training portion of the Hello AI World tutorial (which uses PyTorch). To re-train these models on your own dataset, you can use the TensorFlow Object Detection API, ideally running on a PC/server/cloud instance. It can also run on the Nano, albeit slowly and with extra swap space mounted. Here are some resources on that:
- https://medium.com/swlh/nvidia-jetson-nano-custom-object-detection-from-scratch-using-tensorflow-and-opencv-113fe4dba134
- https://jkjung-avt.github.io/hand-detection-tutorial/
- https://jkjung-avt.github.io/object-detection-tutorial/
- https://devtalk.nvidia.com/default/topic/1056054/jetson-tx2/how-to-retrain-ssd_inception_v2_coco_2017_11_17-from-the-tensorrt-samples/post/5397203/#5397203
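When re-training with the TensorFlow Object Detection API, the key file you edit is the `pipeline.config` for the model. As a rough sketch (the paths, class count, and batch size below are placeholders you would substitute for your own dataset, not values from this post), the parts you typically change look like:

```
model {
  ssd {
    num_classes: 2                # placeholder: number of classes in your dataset
  }
}
train_config {
  batch_size: 24                  # placeholder: reduce if you run out of memory
  fine_tune_checkpoint: "ssd_mobilenet_v2_coco/model.ckpt"   # placeholder: pre-trained checkpoint to start from
}
train_input_reader {
  label_map_path: "label_map.pbtxt"      # placeholder: your class label map
  tf_record_input_reader {
    input_path: "train.record"           # placeholder: your training TFRecord
  }
}
```

Training is then typically launched with the API's `model_main.py` script, pointing `--pipeline_config_path` at this file and `--model_dir` at an output directory for checkpoints. The tutorials linked above walk through generating the TFRecords and label map.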
Alternatively, you could install and run DIGITS on a PC and use it to train DetectNet; however, the SSD-based models above offer better performance.