TypeError: ‘NoneType’ object cannot be interpreted as an integer - pruning - yolov3 model

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : X86_64 GPU Machine
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : YOLOv3
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here):
training_config.txt (2.0 KB)

• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I am getting the following error while pruning the trained model:
TypeError: ‘NoneType’ object cannot be interpreted as an integer

Training config: see the previously attached file.

PRUNE COMMAND: tao detectnet_v2 prune -m /home/soundarrajan/yolov3/result/training_model/weights/yolov3_resnet18_epoch_010.tlt -o /home/soundarrajan/yolov3/dataset/yolov3_resnet18_epoch_010_pruned_default.tlt -k tao_encode --log_file /home/soundarrajan/yolov3/logs/pruning_log.txt -v -pth 0.7

pruning log:
pruning_log.txt (34.6 KB)

It seems the input tensor size is not set.

KMEANS COMMAND: tao yolo_v3 kmeans -l /home/soundarrajan/yolov3/dataset/transfer_learning/train/labels -i /home/soundarrajan/yolov3/dataset/transfer_learning/train/data -x 1248 -y 384 --log_file /home/soundarrajan/yolov3/logs/transfer_learning/kmeans_algo_log.txt
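For reference, the network input size for a TAO YOLOv3 model normally comes from the training spec rather than from the prune command line. A minimal illustrative snippet, assuming the standard TAO YOLOv3 spec layout and reusing the same 1248x384 size passed to kmeans above (the attached training_config.txt may differ):

augmentation_config {
  output_width: 1248
  output_height: 384
  output_channel: 3
}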

Please use tao yolo_v3 prune instead of tao detectnet_v2 prune.

Hi @Morganh,

COMMAND USED: tao yolo_v3 prune -m /home/soundarrajan/yolov3/result/training_model/weights/yolov3_resnet18_epoch_010.tlt -o /home/soundarrajan/yolov3/dataset/yolov3_resnet18_epoch_010_pruned_default.tlt -k tao_encode --log_file /home/soundarrajan/yolov3/logs/pruning_log.txt -v -pth 0.7

OUTPUT:

2022-06-09 10:48:17,302 [INFO] root: Registry: ['nvcr.io']
2022-06-09 10:48:17,388 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-eq__ifxv because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
Using TensorFlow backend.
usage: yolo_v3 prune [-h] [--num_processes NUM_PROCESSES] [--gpus GPUS]
                     [--gpu_index GPU_INDEX [GPU_INDEX ...]] [--use_amp]
                     [--log_file LOG_FILE] -m MODEL -o OUTPUT_FILE -e
                     EXPERIMENT_SPEC_PATH -k KEY [-n NORMALIZER]
                     [-eq EQUALIZATION_CRITERION] [-pg PRUNING_GRANULARITY]
                     [-pth PRUNING_THRESHOLD] [-nf MIN_NUM_FILTERS]
                     [-el [EXCLUDED_LAYERS [EXCLUDED_LAYERS ...]]]
                     [--results_dir RESULTS_DIR] [-v]
                     {dataset_convert,evaluate,export,inference,kmeans,prune,train}
                     ...
yolo_v3 prune: error: the following arguments are required: -e/--experiment_spec_path

It is asking for a specification file, but does pruning really require a spec file?
It is not mentioned here: YOLOv3 — TAO Toolkit 3.22.05 documentation

Please add the spec file and retry.
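For example, the full prune command with the experiment spec added would look like the following; the spec path is only a placeholder and must point to the training spec (the attached training_config.txt) as it is mounted inside the TAO container:

EXAMPLE COMMAND: tao yolo_v3 prune -m /home/soundarrajan/yolov3/result/training_model/weights/yolov3_resnet18_epoch_010.tlt -o /home/soundarrajan/yolov3/dataset/yolov3_resnet18_epoch_010_pruned_default.tlt -e /home/soundarrajan/yolov3/specs/training_config.txt -k tao_encode -pth 0.7 -v --log_file /home/soundarrajan/yolov3/logs/pruning_log.txt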

Hi @Morganh,

It worked, but the spec file is not mentioned in the documentation provided:
https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v3.html#pruning-the-model

Thanks for the info. We will improve the doc.