I use the following command to prune the model:

!tao model yolo_v4_tiny prune -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/yolov4_cspdarknet_tiny_epoch_$EPOCH.hdf5 \
                              -e $SPECS_DIR/yolo_v4_tiny_train_kitti.txt \
                              -o $USER_EXPERIMENT_DIR/experiment_dir_pruned/yolov4_cspdarknet_tiny_pruned.hdf5 \
                              -eq intersection \
                              -pth 0.1
Question 1: After pruning, the model file becomes smaller, so why does the retrained pruned model become larger again? Does pruning really reduce the number of parameters?
Question 2: Without QAT, the pruned model shows no loss in accuracy, but after enabling QAT the accuracy drops a lot. Why is that?
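To sanity-check Question 1 on my side, I compare the number of stored weights in the two .hdf5 files with a quick script like the sketch below. This is only a rough check: it assumes the TAO checkpoints are plain HDF5 containers that h5py can open (which may not hold for every TAO version), and the datasets it counts can include optimizer state as well as layer weights.

import sys
import h5py
import numpy as np

def count_stored_values(path):
    """Sum the element counts of every dataset stored in an HDF5 file."""
    total = 0
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            nonlocal total
            if isinstance(obj, h5py.Dataset):
                total += int(np.prod(obj.shape))
        f.visititems(visit)
    return total

if __name__ == "__main__":
    # Usage: python count_weights.py unpruned.hdf5 pruned.hdf5
    for path in sys.argv[1:]:
        print(f"{path}: {count_stored_values(path):,} stored values")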
Please make sure load_graph is set to true when you run training with a pruned model.
From the table you shared, we can see the “retrain pruned QAT model” has no accuracy drop. Also, training is required after a model has been pruned, so we need to run training against the “pruned QAT model”.
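As a rough sketch (the exact flags, plus any key or GPU options, depend on your TAO version and the notebook you are running), the retrain step against the pruned model would look something like:

!tao model yolo_v4_tiny train -e $SPECS_DIR/yolo_v4_tiny_retrain_kitti.txt \
                              -r $USER_EXPERIMENT_DIR/experiment_dir_retrain

with the retrain spec referencing the pruned .hdf5.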
Where should I add load_graph?
I tried adding it under yolov4_config in yolo_v4_tiny_retrain_kitti.txt, but I get the error: message type ‘YOLOv4Config’ has no field named ‘load_graph’.
Is there a detailed description of load_graph that I can read?
I don’t see a description of load_graph.
Yes. I’m running the official yolo_v4_tiny.ipynb notebook.
There is not much difference in accuracy or file size. In general, what changes does retraining a pruned model bring?