UNet pruned model inference error

Hi.
I pruned a UNet model, but when I run the inference task I get the following error:

Phase test: Total 25 files.
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/inference.py", line 412, in <module>
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/inference.py", line 408, in main
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/inference.py", line 318, in run_experiment
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/inference.py", line 278, in infer_unet
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/inference.py", line 174, in run_inference_tlt
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/model/model_io.py", line 51, in _extract_ckpt
IndexError: list index out of range
Using TensorFlow backend.

Can you share the command and the spec file?

Hi.
This is my spec file:

random_seed: 42
model_config {
  model_input_width: 880
  model_input_height: 880
  model_input_channels: 3
  num_layers: 101
  all_projections: true
  arch: "resnet"
  freeze_blocks: 0
  freeze_blocks: 1
  use_batch_norm: true
  training_precision {
    backend_floatx: FLOAT32
  }
}

training_config {
  batch_size: 2
  epochs: 300
  log_summary_steps: 499
  checkpoint_interval: 5
  loss: "cross_dice_sum"
  learning_rate: 0.0002
  regularizer {
    type: L2
    weight: 3e-09
  }
  optimizer {
    adam {
      epsilon: 9.99999993923e-09
      beta1: 0.899999976158
      beta2: 0.999000012875
    }
  }
}
dataset_config {
  dataset: "custom"
  augment: false
  augmentation_config {
    spatial_augmentation {
      hflip_probability: 0.5
      vflip_probability: 0.5
      crop_and_resize_prob: 0.5
    }
    brightness_augmentation {
      delta: 0.2
    }
  }
  input_image_type: "color"
  train_images_path: "/workspace/tlt/results/unet/unpruned/corrosion_1000_temp/train/images/"
  train_masks_path: "/workspace/tlt/results/unet/unpruned/corrosion_1000_temp/train/masks"

  val_images_path: "/workspace/tlt/results/unet/unpruned/corrosion_1000_temp/val/images"
  val_masks_path: "/workspace/tlt/results/unet/unpruned/corrosion_1000_temp/val/masks"

  test_images_path:"/workspace/tlt/results/unet/corrosion_1000/images/test/"

  data_class_config {
    target_classes {
      name: "background"
      mapping_class: "background"
      label_id: 0
    }
    target_classes {
      name: "foreground"
      mapping_class: "foreground"
      label_id: 255
    }  
  }
}

This is my command:

unet inference --gpu_index 0 \
               -e /workspace/tlt/results/unet/prune/final_spec.txt \
               -m /workspace/tlt/results/unet/prune/final_model.tlt \
               -o /workspace/tlt/results/unet/prune/inference/ \
               -k $KEY

Can you modify the above to the following (i.e., without the trailing slash)?

test_images_path: "/workspace/tlt/results/unet/corrosion_1000/images/test"

I tried that, but got the same error.
I think the pruned model is not loading properly.

Can you run inference with the unpruned .tlt model?

Yes, I can.
I am using TAO; the error in TLT is different.

I will check further. Just to confirm your results, please correct me if anything is wrong:

  1. Run inference with the unpruned .tlt model: successful
  2. Run inference with the pruned .tlt model: fails

Yes, that is quite right.
Thank you.

Currently, UNet does not support inference/evaluation for a pruned model; it only supports evaluation/inference for a re-trained pruned model. Please re-train the pruned model first, and then run evaluation or inference.
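For reference, a re-training run on the pruned model might look like the following. This is only a sketch: the retrain spec path, results directory, and model name are placeholders, and the exact flags should be checked against the TAO UNet documentation for your version. The retrain spec also typically needs load_graph: true in model_config so that the pruned graph is loaded rather than rebuilt from the architecture parameters.

unet train --gpus 1 \
           -e /workspace/tlt/results/unet/prune/retrain_spec.txt \
           -r /workspace/tlt/results/unet/retrain/ \
           -m /workspace/tlt/results/unet/prune/final_model.tlt \
           -n retrained_model \
           -k $KEY

After re-training finishes, run unet inference again with -m pointing at the re-trained .tlt model instead of the pruned one.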
