How to use different models for training on Nano

Hi

I have been following Jetson Inference and have successfully trained a model using train_ssd.py. The trained model is an SSD-Mobilenet model. I wanted to know how we can use a different model (network) and train it on our own custom dataset. Is there a Jetson model zoo available from which we can download models? Also, does train_ssd.py work only for SSD-Mobilenet models, or can we use this same script to train other models as well?

Hi,

It supports several model architectures.
Please find the details below:

In general, you can use it with a PyTorch-based model, but some updates might be required in the output parsing.
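As a rough illustration of what "output parsing" involves: a detector typically emits per-box confidence scores and coordinates, which you then filter by a confidence threshold and de-duplicate with non-maximum suppression (NMS). This is only a sketch in plain Python with made-up shapes and names — the actual tensors your model exports may differ.

```python
# Hedged sketch of detector output parsing: confidence filtering + greedy NMS.
# The score/box layout here is an assumption for illustration, not the exact
# output format of train_ssd.py's exported models.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def parse_detections(scores, boxes, conf_threshold=0.5, iou_threshold=0.45):
    """Drop low-confidence boxes, then keep the highest-scoring box of each
    overlapping cluster (greedy NMS)."""
    candidates = [(s, b) for s, b in zip(scores, boxes) if s >= conf_threshold]
    candidates.sort(key=lambda sb: sb[0], reverse=True)
    kept = []
    for score, box in candidates:
        if all(iou(box, kb) < iou_threshold for _, kb in kept):
            kept.append((score, box))
    return kept

# Two heavily overlapping boxes plus one low-confidence box:
scores = [0.9, 0.8, 0.3]
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
print(parse_detections(scores, boxes))  # only the 0.9 box survives
```

The second box is suppressed because it overlaps the first (IoU ≈ 0.82), and the third is dropped by the 0.5 confidence threshold.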
For more details, please check the comment below.

Thanks.

SSD-Mobilenet-v1 is the only network architecture from train_ssd.py that I’ve validated to be working through the whole pipeline (training + ONNX export from PyTorch + ONNX import to TensorRT).
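For reference, that validated pipeline looks roughly like the following. The flags and paths are taken from the Hello AI World tutorial as best I recall them; the dataset and model directory names are placeholders you would substitute for your own.

```shell
# Hypothetical paths -- adjust for your own dataset and checkout.
# Run from jetson-inference/python/training/detection/ssd

# 1) Train SSD-Mobilenet-v1 on a Pascal VOC-format custom dataset
python3 train_ssd.py --dataset-type=voc --data=data/my-dataset \
                     --model-dir=models/my-detector --epochs=30

# 2) Export the trained PyTorch checkpoint to ONNX
python3 onnx_export.py --model-dir=models/my-detector

# 3) Load the ONNX model with TensorRT via detectnet
#    (the TensorRT engine is built and cached on first run)
detectnet --model=models/my-detector/ssd-mobilenet.onnx \
          --labels=models/my-detector/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          /dev/video0
```

A different architecture would need its own export step and matching input/output blob names at the detectnet stage, which is where the other networks from train_ssd.py have not been validated.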

If you want to use another model for object detection, such as YOLO, here are some resources that use TensorRT for inferencing:

The Hello AI World / jetson-inference project also contains other training scripts and tutorials for image classification and semantic segmentation. These are different from the scripts used for object detection.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.