I recently trained a custom SSD-MobileNet model. Below is what the training graph looks like.
As you can see, there are lots of small peaks where the loss increases and then decreases again — 6-7 peaks like this. As per my understanding, the loss should keep decreasing. Is there a reason for a graph like this?
Would you mind using a different step size or optimization approach to see if it helps?
Can you please explain how I can define the step size or other optimization settings? I am only using train_ssd.py to train. The model is performing fine, but I just wanted to understand the training graph, which is why I posted the question.
Hi @ART97, there are a bunch of learning rate options and optimizer options that you can set on the command-line to train_ssd.py found here: https://github.com/dusty-nv/pytorch-ssd/blob/21383204c68846bfff95acbbd93d39914a77c707/train_ssd.py#L60
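The two schedulers exposed there are, as I read the upstream script, a multi-step schedule (LR dropped by a factor at fixed epoch milestones) and cosine annealing (LR decayed smoothly toward zero). Here's a minimal plain-Python sketch of what each one does to the learning rate over training — the milestone values, gamma, and t-max below are illustrative defaults I'm assuming from the script, so double-check them against your copy:

```python
import math

def multi_step_lr(base_lr, epoch, milestones=(80, 100), gamma=0.1):
    """Multi-step schedule: the LR is multiplied by gamma
    once for each milestone the current epoch has passed."""
    return base_lr * gamma ** sum(1 for m in milestones if epoch >= m)

def cosine_lr(base_lr, epoch, t_max=100):
    """Cosine annealing: the LR decays smoothly from base_lr
    toward 0 over t_max epochs following a half-cosine curve."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * min(epoch, t_max) / t_max))

# Compare the two schedules at a few epochs (base LR 0.01 assumed)
for epoch in (0, 50, 80, 100):
    print(epoch, multi_step_lr(0.01, epoch), cosine_lr(0.01, epoch))
```

With the multi-step schedule the LR drops abruptly at the milestones, while cosine annealing never jumps — abrupt LR changes (or an LR that is simply too high) are one common reason a loss curve shows bumps instead of a monotonic decline, which may be worth keeping in mind when reading your graph.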
Although admittedly I haven't messed with these or run training for more than 100 epochs. I believe going much further may lead to overfitting your model (i.e., attaining the lowest possible loss on your training set doesn't always translate to better real-world performance).