Access to ssd-mobilenetv2 lite

Hello, I'm trying to use a different custom model for DetectNet. When I type 'python3 train_ssd.py --help', I see that I can use these models:

--net NET    The network architecture, it can be mb1-ssd, mb1-ssd-lite, mb2-ssd-lite or vgg16-ssd.

but when I try to pass one of those arguments, it doesn't work. I see that you have to download a base model for it, but the one shown is for the MobileNet-v1 SSD:

wget https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth -O models/mobilenet-v1-ssd-mp-0_675.pth
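For reference, this is roughly the command I'm trying to run (the dataset/model paths are just my own, and the base checkpoint filename below is a placeholder for whatever the mb2-ssd-lite weights would be called):

```bash
# train against a VOC-format dataset using the MobileNet-v2 SSD-Lite architecture
# (the --pretrained-ssd filename is a placeholder; it should point at the downloaded base checkpoint)
python3 train_ssd.py --dataset-type=voc --data=data/my-dataset --model-dir=models/my-model \
    --net=mb2-ssd-lite --pretrained-ssd=models/mb2-ssd-lite-base.pth \
    --batch-size=4 --epochs=30
```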

Where can I get the base model for mb2-ssd-lite, or for the other architectures listed there? I'm trying different architectures to see which one gives better object detection.

Also, would I be able to do something similar and run other PyTorch networks that aren't listed here, like YOLOv8, or would I not be able to convert those over to DetectNet?

thank you

The other base models can be found in the upstream pytorch-ssd repo.

Not all are guaranteed to work with the ONNX export + TensorRT import. I recall trying mb2-ssd-lite, but it didn't really perform better than the default ssd-mobilenet-v1.
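If you do train one of the other variants, you can check it the same way as the default model: export it to ONNX and try loading it with detectnet under TensorRT. A rough sketch (the model-dir path is an example, and the exported .onnx filename can differ for non-default networks, so check what onnx_export.py reports):

```bash
# export the trained PyTorch checkpoint to ONNX
python3 onnx_export.py --model-dir=models/my-model

# load the exported model with TensorRT via detectnet and run it over some test images
detectnet --model=models/my-model/ssd-mobilenet.onnx --labels=models/my-model/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          "images/test/*.jpg" images/test_output_%i.jpg
```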

The TAO Detection models are supported and are optimized (and you can train your own with the TAO Toolkit on x86) - https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-tao.md

I don’t support YOLO in jetson-inference because there are too many variants for me to conceivably support; however, there are a number of resources available for deploying YOLOv8 with TensorRT.

Thank you, Dusty!
