Inference error in classification

I trained the classification network with a 1920×1376 input image size and pruned it without any error. But when retraining, it gives me this error:

"ValueError: Error when checking input: expected input_1 to have shape (3, 224, 224) but got array with shape (3, 1920, 1376)"
What should I do to fix this?
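The error means the model being loaded for retraining expects a 224×224 input, regardless of what the spec requests. As a quick sanity check (a minimal sketch; `parse_input_size` is a hypothetical helper, not a TLT API), you can compare the spec's requested shape against the shape reported in the error:

```python
def parse_input_size(spec_value):
    """Parse a TLT-style input_image_size string such as "3,1920,1376"
    into a (channels, height, width) tuple of ints."""
    return tuple(int(v) for v in spec_value.split(","))

spec_shape = parse_input_size("3,1920,1376")  # what the retrain spec requests
model_shape = (3, 224, 224)                   # what the error says the model expects

# The mismatch shows the loaded model was built with a different
# input size than the one in the retrain spec.
print(spec_shape == model_shape)  # → False
```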

The spec files for both train and retrain have 1920×1376 as the input image size:

retrain:

model_config {
arch: "resnet"
n_layers: 18
use_batch_norm: true
all_projections: true
input_image_size: "3,1920,1376"
}
train_config {
train_dataset_path: "/workspace/tlt-experiments/data/split/train"
val_dataset_path: "/workspace/tlt-experiments/data/split/val"
pretrained_model_path: "/workspace/tlt-experiments/classification/output/resnet_pruned/resnet18_nopool_bn_pruned.tlt"
optimizer: "sgd"
batch_size_per_gpu: 32
n_epochs: 80
n_workers: 16

# regularizer

reg_config {
type: "L2"
scope: "Conv2D,Dense"
weight_decay: 0.00005
}

# learning_rate

lr_config {
scheduler: "step"
learning_rate: 0.006
#soft_start: 0.056
#annealing_points: "0.3, 0.6, 0.8"
#annealing_divider: 10
step_size: 10
gamma: 0.1
}
}
eval_config {
eval_dataset_path: "/workspace/tlt-experiments/data/split/test"
model_path: "/workspace/tlt-experiments/classification/output_retrain/weights/resnet_080.tlt"
top_k: 3
batch_size: 8
n_workers: 8
}

train:

model_config {
arch: "resnet"
n_layers: 18

# Setting these parameters to true to match the template downloaded from NGC.

use_batch_norm: true
all_projections: true
freeze_blocks: 0
freeze_blocks: 1
input_image_size: "3,1920,1376"
}
train_config {
train_dataset_path: "/workspace/tlt-experiments/data/split/train"
val_dataset_path: "/workspace/tlt-experiments/data/split/val"
pretrained_model_path: "/workspace/tlt-experiments/classification/pretrained_resnet18/tlt_pretrained_classification_vresnet18/resnet_18.hdf5"
optimizer: "sgd"
batch_size_per_gpu: 32
n_epochs: 80
n_workers: 16

# regularizer

reg_config {
type: "L2"
scope: "Conv2D,Dense"
weight_decay: 0.00005
}

# learning_rate

lr_config {
scheduler: "step"
learning_rate: 0.006
#soft_start: 0.056
#annealing_points: "0.3, 0.6, 0.8"
#annealing_divider: 10
step_size: 10
gamma: 0.1
}
}
eval_config {
eval_dataset_path: "/workspace/tlt-experiments/data/split/test"
model_path: "/workspace/tlt-experiments/classification/output/weights/resnet_080.tlt"
top_k: 3
batch_size: 32
n_workers: 8
}

Did you train with the default Jupyter notebook?

Yes, with the Jupyter notebook.

I will check it too.
BTW, could you try the workaround from the topic "Error: Transfer learning toolkit for classification failed to setting image size"?
Try setting the following:

use_bias: False
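For context, the workaround places this flag inside `model_config` in the retrain spec; a sketch based on the spec posted above:

```
model_config {
  arch: "resnet"
  n_layers: 18
  use_batch_norm: true
  all_projections: true
  use_bias: False
  input_image_size: "3,1920,1376"
}
```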

Hi behna.rahimi,

Have you tried the workaround mentioned in the last comment?
Are there any results you can share?