Pruned model returns to its original size after retraining

Please provide the following information when requesting support.

• Hardware: NVIDIA A5000 x8
• Network Type: DetectNet_v2
• TLT Version (“tlt info --verbose” output): format_version: 2.0 / toolkit_version: 3.22.05 / docker_tag: v3.21.11-py3
• Training spec file: detectnet_v2_train_resnet18_kitti.txt (7.0 KB)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I trained a DetectNet_v2 ResNet18 model; the resulting model file is 43 MB. I then pruned it at a pruning ratio of ~15%, which brought it down to 7 MB. After retraining the pruned model for 10 epochs (just as a quick test), the retrained model is 43 MB again, with weak accuracy. The low accuracy after only 10 epochs doesn’t really matter; what matters is that the retrained model is exactly as large as the unpruned one.
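
For reference, the prune step was along these lines (the paths and the -pth threshold below are placeholders, and on newer releases the launcher command is tao rather than tlt):

tlt detectnet_v2 prune \
    -m /workspace/experiments/resnet18_detector.tlt \
    -o /workspace/experiments/resnet18_pruned.tlt \
    -pth 0.01 \
    -k $KEY

The relevant settings from the training spec: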

batch_size_per_gpu: 4
num_epochs: 140  # 10 for the retraining run
learning_rate {
  soft_start_annealing_schedule {
    min_learning_rate: 5e-06
    max_learning_rate: 5e-04
    soft_start: 0.10000000149
    annealing: 0.699999988079
  }
}

Shouldn’t the retrained model be smaller, or am I doing something wrong? How can I make the model smaller?

Could you please share the spec file you use when running retraining?

Both spec files are attached below.

detectnet_v2_train_resnet18_kitti.txt (4.7 KB)
detectnet_v2_retrain_resnet18_kitti.txt (4.7 KB)

Refer to DetectNet_v2 - NVIDIA Docs

Please set load_graph: true in the retraining spec. With load_graph: false, retraining rebuilds the original unpruned graph and loads only the matching weights into it, which is why the retrained model comes out at 43 MB again. From the documentation:

A flag to determine whether to load the graph from the pretrained model file, or just the weights. For a pruned model, set this parameter to true: pruning modifies the original graph, so both the pruned model graph and the weights need to be imported.
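
For example, a minimal sketch of the model_config section of the retraining spec (the pretrained_model_file path is a placeholder; point it at your pruned .tlt model):

model_config {
  pretrained_model_file: "/workspace/experiments/resnet18_pruned.tlt"  # placeholder: your pruned model
  num_layers: 18
  load_graph: true  # import the pruned graph, not just the weights
  use_batch_norm: true
  arch: "resnet"
}

With this set, the retrained model keeps the pruned architecture, so its file size should stay close to the 7 MB pruned model rather than growing back to 43 MB.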

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.