In “Collecting your own Classification Datasets”,
In train.py, RGB normalization is applied first.
The training data is randomly cropped with a random scale and aspect ratio, and then randomly flipped horizontally.
```python
#transforms.Resize(224), transforms.RandomResizedCrop(args.resolution), transforms.RandomHorizontalFlip(),
```
The validation data is resized so that its short side is 256 (preserving the aspect ratio) and then center-cropped along the long side.
Is my understanding correct?
When running imagenet.py --model=$NET/resnet18.onnx, what image preprocessing is performed?
I understand that RGB normalization is applied.
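For reference, this is the ImageNet-style normalization I assume is being applied at inference time. This is my own sketch based on the standard mean/std values, not code taken from imagenet.py:

```python
import numpy as np

# Assumed ImageNet normalization constants (my assumption, not from imagenet.py)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb_hwc_uint8):
    """Normalize an HxWx3 uint8 RGB image (already resized to the network
    input size) and lay it out as an NCHW float32 batch of one."""
    x = rgb_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - MEAN) / STD                           # per-channel normalization
    return x.transpose(2, 0, 1)[np.newaxis]        # HWC -> NCHW
```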
I want to train and run inference using the entire image, with its varying aspect ratio, without cropping. How can I do this?