Hello, I can train a 512-resolution model following guides on here, but they seem to be specific to the v1 base model - is there a v2 model I can download and use as the base for the 512 version?
Many thanks
Hi,
Would you mind sharing the guide you mentioned with us first?
Do you use jetson-inference?
If so, you can modify the model configuration to use MobileNet v2 and train directly.
# Re-training SSD-Mobilenet
Next, we'll train our own SSD-Mobilenet object detection model using PyTorch and the [Open Images](https://storage.googleapis.com/openimages/web/visualizer/index.html?set=train&type=detection&c=%2Fm%2F06l9r) dataset. SSD-Mobilenet is a popular network architecture for realtime object detection on mobile and embedded devices that combines the [SSD-300](https://arxiv.org/abs/1512.02325) Single-Shot MultiBox Detector with a [Mobilenet](https://arxiv.org/abs/1704.04861) backbone.
<a href="https://arxiv.org/abs/1512.02325"><img src="https://github.com/dusty-nv/jetson-inference/raw/master/docs/images/pytorch-ssd-mobilenet.jpg"></a>
In the example below, we'll train a custom detection model that locates 8 different varieties of fruit, although you are welcome to pick from any of the [600 classes](https://github.com/dusty-nv/pytorch-ssd/blob/master/open_images_classes.txt) in the Open Images dataset to train your model on. You can visually browse the dataset [here](https://storage.googleapis.com/openimages/web/visualizer/index.html?set=train&type=detection).
<img src="https://github.com/dusty-nv/jetson-inference/raw/master/docs/images/pytorch-fruit.jpg">
To get started, first make sure that you have [JetPack 4.4](https://developer.nvidia.com/embedded/jetpack) (or newer) and [PyTorch installed](pytorch-transfer-learning.md#installing-pytorch) for **Python 3** on your Jetson. JetPack 4.4 includes TensorRT 7.1, which is the minimum TensorRT version that supports loading SSD-Mobilenet via ONNX. Newer versions of TensorRT are fine too.
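As a quick, optional sanity check (not part of the original guide), the installed versions can be verified from Python; this assumes the `torch` and `tensorrt` packages that ship with JetPack are importable:

```python
# Optional sanity check: confirm PyTorch sees the GPU and TensorRT is >= 7.1,
# the minimum version that can load SSD-Mobilenet via ONNX.
import torch
import tensorrt

print("PyTorch :", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("TensorRT:", tensorrt.__version__)
assert tuple(int(x) for x in tensorrt.__version__.split(".")[:2]) >= (7, 1), \
    "TensorRT 7.1 or newer is required to load SSD-Mobilenet via ONNX"
```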
## Setup
The PyTorch code for training SSD-Mobilenet is found in the repo under [`jetson-inference/python/training/detection/ssd`](https://github.com/dusty-nv/pytorch-ssd). If you aren't [Running the Docker Container](aux-docker.md), there are a couple steps required before using it:
Thanks.
Hi @liellplane, you can download other base models for train_ssd.py here: https://drive.google.com/drive/folders/1pKn-RifvJGWiOx0ZCRLtCXM5GT5lAluu
I haven't tested SSD-Mobilenet-v2 with higher resolutions though - for starters, you would need to add `config.set_image_size(args.resolution)` under this line of code:
```python
    config = mobilenetv1_ssd_config
    config.set_image_size(args.resolution)
elif args.net == 'mb1-ssd-lite':
    create_net = create_mobilenetv1_ssd_lite
    config = mobilenetv1_ssd_config
elif args.net == 'sq-ssd-lite':
    create_net = create_squeezenet_ssd_lite
    config = squeezenet_ssd_config
elif args.net == 'mb2-ssd-lite':
    create_net = lambda num: create_mobilenetv2_ssd_lite(num, width_mult=args.mb2_width_mult)
    config = mobilenetv1_ssd_config
else:
    logging.fatal("The net type is wrong.")
    parser.print_help(sys.stderr)
    sys.exit(1)

# create data transforms for train/test/val
train_transform = TrainAugmentation(config.image_size, config.image_mean, config.image_std)
target_transform = MatchPrior(config.priors, config.center_variance,
                              config.size_variance, 0.5)
```
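Note that the `mb2-ssd-lite` branch above reuses `mobilenetv1_ssd_config` as well, so for a 512-input SSD-Mobilenet-v2 the same (untested) change would presumably be needed in that branch too - a sketch, assuming the same `set_image_size()` helper:

```python
elif args.net == 'mb2-ssd-lite':
    create_net = lambda num: create_mobilenetv2_ssd_lite(num, width_mult=args.mb2_width_mult)
    config = mobilenetv1_ssd_config
    config.set_image_size(args.resolution)  # untested: propagate the requested resolution to the v2 config too
```

Training would then be launched with `--net=mb2-ssd-lite` and, assuming your copy of train_ssd.py exposes the `--resolution` argument implied by `args.resolution` above, `--resolution=512`.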
Thanks AastaLLL, it's not really a guide, it was various topics on here such as:
How train jetson-inference ssd512 model - Jetson & Embedded Systems / Jetson TX2 - NVIDIA Developer Forums
Thanks for that link! OK, I will start there - in your opinion, is there enough of a performance difference between v1 and v2 on a Xavier to warrant the effort of using v2?
@liellplane, no - in my basic testing of SSD-Mobilenet-v1 vs SSD-Mobilenet-v2 on Xavier NX, there were no real appreciable differences (SSD-Mobilenet-v2 was actually a couple percent slower), so I just stuck with SSD-Mobilenet-v1 since it is very stable to train and deploy at this point.
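For anyone wanting to reproduce that comparison, a rough timing sketch with the jetson-inference Python bindings might look like the following; the built-in `ssd-mobilenet-v1` / `ssd-mobilenet-v2` model names are real, but the test image path is an assumption, and the first call is discarded to exclude TensorRT engine build/warm-up:

```python
# Rough per-model timing sketch using the jetson-inference Python API.
# Assumes jetson.inference / jetson.utils are installed and "test.jpg" exists (hypothetical image).
import time
import jetson.inference
import jetson.utils

img = jetson.utils.loadImage("test.jpg")

for model in ("ssd-mobilenet-v1", "ssd-mobilenet-v2"):
    net = jetson.inference.detectNet(model, threshold=0.5)
    net.Detect(img, overlay="none")            # warm-up run (loads/builds the TensorRT engine)

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        net.Detect(img, overlay="none")        # skip overlay drawing to time inference only
    elapsed = time.perf_counter() - start
    print(f"{model}: {runs / elapsed:.1f} FPS average over {runs} runs")
```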
very helpful info, thanks!!