Hi, I am training an SSD model inside the TAO docker container nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3
I run the model training, inside the docker container, like this:
ssd train --gpus 1 --gpu_index=0 -e specs/ssd_train_resnet18_kitti.txt -r output/unpruned -k mykey
my specs file is attached. ssd_train_resnet18_kitti.txt (1.4 KB)
The SSD training saves the model weights after each epoch, and since each file is ~100 MB, the required disk space grows quite quickly. Is there any way to change this behavior?
I would prefer to keep only the best model weights, based on the validation mAP metric.
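Until there is a built-in option, a workaround I am considering is a small cleanup script that runs after training: pick the epoch with the highest validation mAP (parsed from the training log) and delete every other checkpoint. This is only a sketch under two assumptions that may not match my setup exactly: the checkpoint filenames follow a pattern like `ssd_resnet18_epoch_003.tlt`, and I already have a mapping of epoch to validation mAP.

```python
import re
from pathlib import Path


def best_epoch(map_by_epoch):
    """Return the epoch with the highest validation mAP.

    map_by_epoch: dict {epoch_number: mAP}, e.g. parsed from the
    TAO training log (the exact log format is an assumption here).
    """
    return max(map_by_epoch, key=map_by_epoch.get)


def checkpoints_to_delete(ckpt_dir, keep_epoch):
    """List checkpoint files whose epoch differs from keep_epoch.

    Assumes weight files are named like 'ssd_resnet18_epoch_003.tlt'
    (hypothetical pattern -- adjust the regex to your output dir).
    Returns the paths without deleting, so the result can be reviewed
    before calling path.unlink() on each entry.
    """
    pattern = re.compile(r"epoch_(\d+)\.tlt$")
    to_delete = []
    for path in Path(ckpt_dir).glob("*.tlt"):
        m = pattern.search(path.name)
        if m and int(m.group(1)) != keep_epoch:
            to_delete.append(path)
    return to_delete
```

For example, with mAP values {1: 0.52, 2: 0.71, 3: 0.64}, `best_epoch` returns 2, and `checkpoints_to_delete(output_dir, 2)` lists every `.tlt` file except the epoch-2 one.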