I’m trying to train an image classification model on my Jetson Nano to recognise 5 of my own classes, as per this:
I have about 90 training images and 15 validation images per class, and no test images. I train for 100 epochs (I should have checked whether the accuracy converges, but haven’t yet). The training and validation sets each cover 3 different backgrounds (basically different coloured T-shirts). My folder layout is sketched below, after the class list.
I am using these as my classes:
- apple
- gameboy
- honey
- suncream
- toothpaste
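For reference, this is roughly how I check that my folders match the train/val/test layout I understood the tutorial to expect. The dataset root `data/myobjects` is just a placeholder for my actual path, so treat this as a sketch of my setup rather than exactly what I ran:

```python
# Minimal sketch to double-check my dataset layout and per-class image counts.
# Assumes the train/val/test folder convention from the tutorial and a
# hypothetical dataset root of data/myobjects.
from pathlib import Path

DATASET_ROOT = Path("data/myobjects")  # placeholder for my real path
CLASSES = ["apple", "gameboy", "honey", "suncream", "toothpaste"]
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

for split in ("train", "val", "test"):
    for cls in CLASSES:
        folder = DATASET_ROOT / split / cls
        if folder.is_dir():
            count = sum(1 for p in folder.iterdir() if p.suffix.lower() in IMAGE_EXTS)
        else:
            count = 0
        print(f"{split}/{cls}: {count} images")
```

Running that shows roughly 90 per class under train, 15 per class under val, and empty test folders, which matches what I described above.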
When I export to ONNX and run the model, I am sad to say it is nearly always about 24% confident it is seeing an apple, no matter what I present to the camera. It occasionally flickers to honey, and if I present the gameboy it does briefly flicker to gameboy, but only once in a while.
The key point is that the confidence is always 24.XY%, whatever class it reports.
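To check whether that near-uniform confidence comes from the trained model itself or from how I run it on the Nano, I was thinking of a quick sanity check with onnxruntime along these lines. The model and image paths are placeholders, and I’m assuming the exported ResNet-18 takes a 224x224 input with standard ImageNet normalisation and outputs raw logits:

```python
# Sketch of an off-device sanity check: load the exported ONNX model with
# onnxruntime, run one image through it, and print the class probabilities.
# Paths are hypothetical; preprocessing assumes 224x224 input, ImageNet
# mean/std normalisation, and raw logits as the model output.
import numpy as np
import onnxruntime as ort
from PIL import Image

CLASSES = ["apple", "gameboy", "honey", "suncream", "toothpaste"]

def preprocess(path):
    img = Image.open(path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std
    return x.transpose(2, 0, 1)[np.newaxis, ...]  # NCHW, batch of 1

sess = ort.InferenceSession(
    "models/myobjects/resnet18.onnx",  # placeholder path to my exported model
    providers=["CPUExecutionProvider"],
)
input_name = sess.get_inputs()[0].name
logits = sess.run(None, {input_name: preprocess("test_gameboy.jpg")})[0][0]

# Numerically stable softmax over the 5 class logits
probs = np.exp(logits - logits.max())
probs /= probs.sum()
for cls, p in sorted(zip(CLASSES, probs), key=lambda t: -t[1]):
    print(f"{cls}: {p:.1%}")
```

If this also prints roughly 20–25% for every class, I’d take that as a sign the training itself is the problem rather than the way I run the model on the camera feed.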
What am I doing wrong? Do I need more training images? More validation images? Should I have used test images?
I’d like to check that I am on the right path before I go and take another 5 x 100 = 500 images to see whether that improves the training, as this will take me the best part of an hour.
Many thanks.