Hello, I'm trying to use a different custom model for DetectNet. When I run 'python3 train_ssd.py --help',
I see that I can use these networks:
--net NET    The network architecture, it can be mb1-ssd,
             mb1-ssd-lite, mb2-ssd-lite or vgg16-ssd.
But when I try to pass any of these to the --net argument, it doesn't work.
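For reference, this is roughly the command I'm running (the dataset name and paths are just from my setup):

python3 train_ssd.py --dataset-type=voc --data=data/mydataset --model-dir=models/mydataset --net=mb2-ssd-lite --epochs=30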
I see that you have to download a pretrained base model for it, but this one is for SSD-Mobilenet-v1:
wget https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth -O models/mobilenet-v1-ssd-mp-0_675.pth
Where can I get the equivalent for mb2-ssd-lite, or for the other architectures listed there? I'm trying out different architectures to see which one gives better object detection.
Also, would I be able to do something similar and run other PyTorch networks that aren't listed here, like YOLOv8? Or would I not be able to convert those over to DetectNet?
Not all of those architectures are guaranteed to work with the ONNX export + TensorRT import. I recall trying mb2-ssd-lite, but it didn't really perform better than the default ssd-mobilenet-v1.
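The pretrained base checkpoints for the other architectures come from the upstream pytorch-ssd project (qfgaohao/pytorch-ssd) that train_ssd.py is derived from; its README links the weights for each network. As a rough sketch, assuming those files are still hosted at the URLs from that README (verify the filename there, and point train_ssd.py at it with --pretrained-ssd):

wget https://storage.googleapis.com/models-hao/mb2-ssd-lite-mp-0_686.pth -O models/mb2-ssd-lite-mp-0_686.pth
python3 train_ssd.py --dataset-type=voc --data=data/mydataset --model-dir=models/mydataset --net=mb2-ssd-lite --pretrained-ssd=models/mb2-ssd-lite-mp-0_686.pth

After training, the export + deploy flow is the same as in the Hello AI World tutorial (note the exported .onnx filename may differ for other architectures):

python3 onnx_export.py --model-dir=models/mydataset
detectnet --model=models/mydataset/ssd-mobilenet.onnx --labels=models/mydataset/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes csi://0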
I don't support YOLO in jetson-inference because there are too many variants for me to conceivably support; however, there are a number of resources available for deploying YOLOv8 with TensorRT:
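For example, the ultralytics package itself (not part of jetson-inference) can export a YOLOv8 model straight to a TensorRT engine that you can then run on the Jetson; a minimal sketch, assuming ultralytics installs cleanly on your JetPack version:

pip3 install ultralytics
yolo export model=yolov8n.pt format=engine device=0   # builds yolov8n.engine with TensorRT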