MobileNetV2 classification_tf1 pruning: error at pruned layer

I’m using a MobileNetV2-based .tltb model and trying to generate a pruned .etlt model.
When I run the prune command on it, I get the following error.

Attaching my command here:
!tao classification_tf1 prune -bm /workspace/tao-experiments/classification_tf1/byom_voc/output-tltb-coco-norm-pytorch/output-tltb-coco-norm-pytorch.tltb
-m /workspace/tao-experiments/classification_tf1/byom_voc/output-byom-mobv2-pytorch-trained-noprune/weights/byom_001.tlt
-o /workspace/tao-experiments/classification_tf1/byom_voc/output-byom-mobv2-pytorch-trained-prune/output-tltb-coco-norm-pruned.etlt
-k nvidia_tlt
-pth 0.5
--results_dir /workspace/tao-experiments/classification_tf1/byom_voc/output-byom-mobv2-pytorch-trained-prune/

I have done the following steps:

  1. Generated the .tltb file from the ONNX model using tao_byom.
  2. Retrained the .tltb model to generate a .tlt model checkpoint.
  3. Ran the pruning command shown above.
    Attaching dataset sample: (4.2 MB)
    Onnx Model:
    mobilenet_pretr_v2_person_ep0.onnx (8.4 MB)
    .tltb model that I generated:
    output-tltb-coco-norm-pytorch.tltb (7.9 MB)

• Hardware: RTX 3090
• Network type: MobileNetV2
• TLT version: nvidia-tao 4.0.0

Please generate a new .tltb model in the following way:
tao_byom -m mobilenet_pretr_v2_person_ep0.onnx -r results/mobilenet -n person -k nvidia_tlt -p onnx::GlobalAveragePool_533

You need to specify the ONNX node that corresponds to the penultimate layer.
See more info in the BYOM Converter documentation:
tao_byom_examples/classification at main · NVIDIA-AI-IOT/tao_byom_examples · GitHub

I can run the pruning command below successfully:
tao classification_tf1 prune -bm results/mobilenet/person.tltb -m output/weights/byom_002.tlt -o pruned.tlt -eq union -pth 0.68 -k nvidia_tlt

