Maybe the answer is obvious to some, but I have a couple of questions about using my own trained jetson-inference model:
- does the rotation angle of my image matter when I try to classify it?
- does the size of my image matter when I try to classify it?
If the answer to either question is yes, should I always scale and rotate my input images to match those the model was trained on?
Additional question while I’m here: by default, train.py uses ResNet-18. Should I use a different base model, and what determines whether I should? (via train.py --arch)
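For context, my rough understanding is that --arch just selects one of the torchvision classification models, something like this (a sketch of the usual PyTorch ImageNet-example pattern, not the actual train.py source):

```python
# Sketch of how an --arch flag typically selects a torchvision model
# (pattern from the PyTorch ImageNet example; not the actual train.py code)
from torchvision import models

arch = "resnet18"  # value that would come from --arch
model = models.__dict__[arch](pretrained=True)  # e.g. "resnet34" or "resnet50" also work
```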
Thanks.
Hi,
This question is not specific to jetson-inference.
It depends on the training data you used.
If the training images cover different rotations and scales, then the model should be able to handle them.
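If your own dataset doesn’t cover those variations, one common workaround is to add them with augmentation at training time. For example (a minimal sketch assuming torchvision; the dataset path is illustrative):

```python
# Sketch of rotation/scale augmentation with torchvision
# (assumes an ImageFolder-style dataset; "data/train" is illustrative)
import torch
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.RandomRotation(degrees=30),                 # random rotation up to +/-30 degrees
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),   # random scale, cropped to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],       # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True)
```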
So for the pre-trained models, please check the dataset listed on the page below for this information:
Thanks
Okay, thanks. So basically I need to train my model with the object at a set distance from the camera, and then for testing I must make sure the object appears at the same size as it did in the training images before classifying it.
The more robust you want your model to be against variations in camera angle/distance, object orientation/rotation, lighting conditions, different backgrounds, etc., the more training data you will need to collect. If you consider the data collection/training I did in these videos, I kept the camera fixed but moved the objects around into various positions (and collected only 100 images per object).
Thanks, it looks like I need a lot more photos.
Is there a way of telling how many photos are needed for training?
Is the trick to basically watch the accuracy figure during each epoch to see when it converges and to what value? (I see that you have accuracy vs epoch graphs in all the tutorials).
I am guessing there comes a point where adding more photos does not add to the accuracy of the model. I realise these might be general AI questions, so I’m happy to do further reading if you point me in the right direction. :-)
Hi @jetsonnvidia, sorry for the delay - it generally depends on the complexity of the data you are trying to classify, how robust it is against different scenarios, and the accuracy you are trying to achieve.
Yes, you can watch the accuracy until it plateaus/converges or reaches your desired value. For example, the cat/dog model reached a bit more than 80% accuracy.
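If it helps, that plateau check can be automated along these lines (a sketch only; train_one_epoch and evaluate are placeholders for your own training and validation steps):

```python
# Sketch of plateau-based stopping (illustrative; not part of train.py)
best_acc = 0.0
patience = 5        # epochs to wait without meaningful improvement
stall = 0
max_epochs = 100

for epoch in range(max_epochs):
    train_one_epoch(model, train_loader)   # placeholder for your training step
    acc = evaluate(model, val_loader)      # placeholder for your validation step
    print(f"epoch {epoch}: val accuracy {acc:.1%}")
    if acc > best_acc + 0.005:             # count only improvements > 0.5%
        best_acc = acc
        stall = 0
    else:
        stall += 1
        if stall >= patience:
            print(f"accuracy plateaued near {best_acc:.1%}; stopping")
            break
```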