Regarding pretrained models for object detection

Sir,

I'm doing a project where I want to create my own detection models for rice seedlings. One of the independent parameters of the project is the pretrained model, so how can I supply various pretrained models during training? I searched a lot for the .pth files, but only ssd-mobilenet-v1 and ssd-mobilenet-v2-lite are available. vgg16-ssd is available, but it produces NaN results.

Hi,

Do you run the model on Nano?
If yes, is there any error during inference?

Thanks.

Yes, I run the model on Nano. There is no error during inference.

Hi @jino.joy.m, I’m assuming you are referring to the jetson-inference object detection training and train_ssd.py - in that case, the only model that is tested/working with ONNX+TensorRT is the ssd-mobilenet-v1, so please use that. The vgg16 was left over from the upstream fork of that train_ssd.py repo and is unvalidated.
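For reference, the base network is selected with the --net flag when launching train_ssd.py. Something along these lines (the dataset paths and hyperparameters below are only placeholders for your own rice-seedling data):

```
# Sketch of a train_ssd.py run following the jetson-inference pytorch-ssd tutorial.
# The data/model paths and hyperparameters are placeholders - adjust for your dataset.
python3 train_ssd.py \
    --dataset-type=voc \
    --data=data/rice-seedlings \
    --model-dir=models/rice-seedlings \
    --net=mb1-ssd \
    --pretrained-ssd=models/mobilenet-v1-ssd-mp-0_675.pth \
    --batch-size=4 --epochs=30
```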

Yes, I'm using it. There is also one more model, ssd-mobilenet-v2-lite, and it was working. Is there any other way to use additional models for object detection?

The ssd-mobilenet-v1 is the only one that I’ve validated to be working with ONNX export + import into TensorRT. The other models may work with PyTorch, but it’s that PyTorch->ONNX->TensorRT pipeline that needs to be working in order to deploy them.

How can I use that pipeline for detection?

You would use the ssd-mobilenet-v1 model that’s used in this tutorial:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md

That network is able to be exported to ONNX from PyTorch and run with TensorRT via the detectnet/detectnet.py program.
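Roughly, the export + inference steps from that tutorial look like the commands below (the model directory, labels file, and image paths are placeholders for your own files):

```
# Export the trained PyTorch checkpoint to ONNX (paths are placeholders)
python3 onnx_export.py --model-dir=models/rice-seedlings

# Load the exported ONNX model with TensorRT and run detection on some test images
detectnet --model=models/rice-seedlings/ssd-mobilenet.onnx \
          --labels=models/rice-seedlings/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          "images/rice/*.jpg" images/test/rice_%i.jpg
```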

Sir, leaving aside ssd-mobilenet-v1, is there any other method I could use to build a model from other pretrained models? I need 3 pretrained models as independent parameters for the project.

Is it part of your project that you need to deploy it with TensorRT for realtime inference? If not, you could use different YOLO models or torchvision object detectors. These aren’t in jetson-inference, but you may not need that. Typically these models are trained on an x86 system or in the cloud. For training + deployment on Nano there are special considerations, such as memory usage - Nano may not have enough memory to train the other detection models, since they are more complex.

No sir, it is not part of the project. I just want to compare models for my object detection project, but I don’t know where to start, since I’m new to this field. I need two more models. Can you help me?

I would check out these object detection models in PyTorch / torchvision:

https://pytorch.org/vision/stable/models.html#object-detection-instance-segmentation-and-person-keypoint-detection
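As a rough sketch (outside of jetson-inference), loading two different pretrained torchvision detectors and adapting them to your own classes could look something like the snippet below. The class count is a placeholder, and depending on your torchvision version the pretrained-weights argument may be pretrained=True instead of weights=:

```python
# Sketch: two pretrained torchvision detectors adapted to a custom dataset
# (rice seedlings). num_classes is a placeholder and includes the background class.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # background + rice seedling (placeholder)

# 1) Faster R-CNN with a ResNet-50 FPN backbone, pretrained on COCO;
#    replace the box predictor so it outputs your own classes.
frcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = frcnn.roi_heads.box_predictor.cls_score.in_features
frcnn.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# 2) SSDlite with a MobileNetV3-Large backbone pretrained on ImageNet;
#    the detection head is created fresh for num_classes.
ssdlite = torchvision.models.detection.ssdlite320_mobilenet_v3_large(
    num_classes=num_classes, weights_backbone="DEFAULT")
```

Both models can then be fine-tuned with a standard PyTorch training loop, which would give you two additional pretrained-backbone detectors to compare against ssd-mobilenet-v1.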
