Training VGG16 on Custom Dataset

Hi,
I am training a vgg16-ssd model on a custom dataset using Google Colab. After 120 epochs the loss is 1.81. On the same dataset I trained a mobilenetv2-ssd model, and its loss is 1.76. The vgg16 model detects nothing (literally zero detections), while the mobilenet model reaches roughly 50% mAP on the training set. I thought vgg16 was a more powerful model than mobilenet; is that correct? If so, why is my vgg16 model performing so poorly? Should I train it longer, and is the model worth training? Maybe the way I am training vgg16 is not correct; this is the command I use:
!python3 train_ssd.py --net=vgg16-ssd --pretrained-ssd=models/vgg16-ssd-mp-0_7726.pth --data=data/mydataset --model-dir=models/mymodels-vgg --dataset-type=voc --batch-size=32 --epochs=120
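
For reference, this is how I check for detections (adapted from run_ssd_example.py in pytorch-ssd; the checkpoint, label, and image paths below are placeholders for my own files). I lowered the probability threshold to 0.1 to see whether the model produces even low-confidence boxes:

```python
import cv2
from vision.ssd.vgg_ssd import create_vgg_ssd, create_vgg_ssd_predictor

# Placeholder paths to my trained checkpoint, label file, and a test image
model_path = 'models/mymodels-vgg/vgg16-ssd-Epoch-119-Loss-1.81.pth'
label_path = 'models/mymodels-vgg/labels.txt'
image_path = 'test.jpg'

class_names = [name.strip() for name in open(label_path).readlines()]

net = create_vgg_ssd(len(class_names), is_test=True)
net.load(model_path)
predictor = create_vgg_ssd_predictor(net, candidate_size=200)

orig_image = cv2.imread(image_path)
image = cv2.cvtColor(orig_image, cv2.COLOR_BGR2RGB)

# Threshold lowered to 0.1 to surface even weak detections
boxes, labels, probs = predictor.predict(image, 10, 0.1)
print(f'found {boxes.size(0)} detections')
```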

Hi @mglaaa, I haven't trained vgg16-ssd before (that model came from the upstream author of pytorch-ssd), so I'm not sure whether or how well it works. From this paper, vgg16 gives only a marginal improvement over mobilenet anyway, while being slower.
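
As a rough proxy for that size/speed comparison, the stock torchvision classifiers (not the exact SSD backbones used by pytorch-ssd, but the same feature extractors) make the gap easy to see; a minimal sketch:

```python
import time
import torch
import torchvision.models as models

def profile(name, model, size=300):
    """Print parameter count and rough CPU latency at SSD300's input size."""
    model.eval()
    n_params = sum(p.numel() for p in model.parameters())
    x = torch.randn(1, 3, size, size)
    with torch.no_grad():
        model(x)  # warm-up
        start = time.time()
        for _ in range(10):
            model(x)
    ms = (time.time() - start) / 10 * 1000
    print(f'{name}: {n_params / 1e6:.1f}M params, ~{ms:.0f} ms/image (CPU)')

profile('vgg16', models.vgg16())
profile('mobilenet_v2', models.mobilenet_v2())
```

As a classifier, VGG16 has roughly 138M parameters versus about 3.5M for MobileNetV2, which is most of why the vgg16-ssd variant is so much heavier for a small accuracy difference.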

I would stick with ssd-mobilenet, and if you need higher accuracy, look at the 512x512 variant instead.
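
For rough intuition about why the larger input helps: the 512x512 model lays down far more default boxes, so small objects get more anchor matches. A quick back-of-the-envelope check, using the per-layer grid sizes and boxes-per-cell from the original SSD paper (Liu et al., 2016):

```python
# (grid size, default boxes per cell) for each detection feature map
ssd300 = [(38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4)]
ssd512 = [(64, 4), (32, 6), (16, 6), (8, 6), (4, 6), (2, 4), (1, 4)]

for name, fmaps in [('SSD300', ssd300), ('SSD512', ssd512)]:
    total = sum(g * g * b for g, b in fmaps)
    print(f'{name}: {total} default boxes')
# SSD300: 8732 default boxes
# SSD512: 24564 default boxes
```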

Thank you @dusty_nv. I will give it a try.
