Hey,
Thanks for your reply.
Yes, I am training a TLT classification network. Here is my spec file:
model_config {
  arch: "resnet"
  n_layers: 10
  use_bias: False
  use_batch_norm: True
  all_projections: False
  freeze_blocks: 0
  freeze_blocks: 1
  freeze_blocks: 2
  freeze_blocks: 3
  # image size should be "3,H,W", where H,W >= 16
  input_image_size: "3,224,224"
}
train_config {
  train_dataset_path: "/data/train"
  val_dataset_path: "/data/valid"
  pretrained_model_path: "/workspace/experiment/pretrained/tlt_pretrained_classification_vresnet10/resnet_10.hdf5"
  optimizer: "sgd"
  batch_size_per_gpu: 256
  n_epochs: 60
  n_workers: 16
  # regularizer
  reg_config {
    type: "L2"
    scope: "Conv2D,Dense"
    weight_decay: 0.00005
  }
  # learning_rate
  lr_config {
    scheduler: "step"
    learning_rate: 0.0005
    step_size: 6
    gamma: 0.3
  }
}
The problem I am facing is that when I separate the objects into direction-specific classes, say ‘Left’ and ‘Right’, my accuracy is always 50%, but when I merge each pair of opposite classes (e.g., left and right become one class, top and bottom become one class, etc.), the accuracy goes very high.
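This is exactly the symptom a random horizontal flip would produce: it maps each ‘Left’ example onto a pixel-identical ‘Right’ one (and vice versa) while the label stays fixed, so the two classes become statistically indistinguishable and 50% is the best any model can do. A toy NumPy sketch of that collapse (illustrative only, not TLT code):

```python
import numpy as np

# Hypothetical 1x5 "arrow" images: the bright pixel marks the direction.
left = np.array([[1, 0, 0, 0, 0]])   # labelled "Left"
right = np.array([[0, 0, 0, 0, 1]])  # labelled "Right"

# A horizontal flip turns a "Left" image into a pixel-identical "Right"
# image, so under random flip augmentation the label carries no signal.
assert np.array_equal(np.fliplr(left), right)
assert np.array_equal(np.fliplr(right), left)
```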
Based on this observation, I want to verify whether any flipping augmentation happens during training. Because my data is direction-specific, a random flip would swap the effective class of an image while its label stayed the same, and flipping would therefore destroy the accuracy.
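One way to probe this without digging into TLT internals is to compare a trained model's predictions on original versus horizontally flipped inputs: a model trained with flip augmentation should be largely flip-invariant. A minimal sketch, where `flip_sensitivity` and `toy_predict` are hypothetical helpers (NCHW layout, matching the "3,H,W" input in the spec above; a real check would pass the exported model's predict function instead):

```python
import numpy as np

def flip_sensitivity(predict, images):
    """Fraction of images whose predicted class changes under a
    horizontal flip. A value near 0 suggests the model is
    flip-invariant (consistent with flip augmentation at training
    time); a direction-aware model should score well above 0."""
    orig = np.argmax(predict(images), axis=-1)
    flipped = np.argmax(predict(images[..., ::-1]), axis=-1)  # flip W axis
    return float(np.mean(orig != flipped))

def toy_predict(batch):
    # Stand-in classifier for (N, C, H, W) batches: scores "Left" by
    # pixel mass in the left half of the image, "Right" by the right half.
    w = batch.shape[-1]
    left_mass = batch[..., : w // 2].sum(axis=(1, 2, 3))
    right_mass = batch[..., w // 2 :].sum(axis=(1, 2, 3))
    return np.stack([left_mass, right_mass], axis=-1)

imgs = np.zeros((2, 1, 1, 4))
imgs[0, 0, 0, 0] = 1.0  # a "Left" arrow
imgs[1, 0, 0, 3] = 1.0  # a "Right" arrow
print(flip_sensitivity(toy_predict, imgs))  # → 1.0 for this direction-aware toy
```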
Thanks.