Maybe the answer is obvious to some, but I have a couple of questions about using my own trained jetson-inference model:
- does the rotation angle of my image matter when I try to classify it?
- does the size of my image matter when I try to classify it?
If the answer to either question is yes, should I always scale and rotate my input images to match the images the model was trained on?
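To make it concrete, this is the kind of preprocessing I'm asking about — a minimal sketch I wrote myself, not anything from jetson-inference. The function name, the 224x224 size, and the known-rotation parameter are all my assumptions:

```python
from PIL import Image

# Sketch of the preprocessing in question (my own code, not jetson-inference):
# undo a known rotation and resize to the training resolution.
# 224x224 is an assumption based on typical ResNet-18 classification inputs.

def normalize_input(img, rotation_deg=0, size=(224, 224)):
    if rotation_deg:
        # rotate back so the image is upright, like the training data
        img = img.rotate(-rotation_deg, expand=True)
    return img.resize(size)

# example: a 640x480 image that was captured rotated 90 degrees
img = Image.new("RGB", (640, 480))
out = normalize_input(img.rotate(90, expand=True), rotation_deg=90)
print(out.size)  # → (224, 224)
```

Is something like this necessary on my end, or does the library handle resizing internally?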
Additional question while I'm here: by default, train.py uses ResNet-18. Should I use a different base model (via train.py --arch), and what determines whether I should?
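For reference, this is the kind of invocation I mean — the dataset and model-dir paths are just placeholders I made up:

```shell
# placeholder paths; --arch takes a torchvision model name (e.g. resnet34, resnet50)
python3 train.py --arch=resnet50 --model-dir=models/my_model data/my_dataset
```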