Evaluation of SSD-MobileNet

Hello everyone!

I re-trained SSD-MobileNet following this tutorial: jetson-inference/pytorch-ssd.md at master · dusty-nv/jetson-inference · GitHub.

When I ran the following: python3 eval_ssd.py --net=mb1-ssd --dataset_type=open_images --model=models/street/mb1-ssd-Epoch-2-Loss-3.924812481620095.pth --dataset=data/street/ --label_file=models/street/labels.txt

I got this error:
root@orinnx:/jetson-inference/python/training/detection/ssd# python3 eval_ssd.py --net=mb1-ssd --dataset_type=open_images --model=models/street/mb1-ssd-Epoch-2-Loss-3.924812481620095.pth --dataset=data/street/ --label_file=models/street/labels.txt
2024-12-24 01:34:49 - loading annotations from: data/street/sub-test-annotations-bbox.csv
2024-12-24 01:34:49 - annotations loaded from: data/street/sub-test-annotations-bbox.csv
num images: 220
2024-12-24 01:34:52 - loading model models/street/mb1-ssd-Epoch-2-Loss-3.924812481620095.pth
Traceback (most recent call last):
File "eval_ssd.py", line 234, in <module>
net.load(args.model)
File "/jetson-inference/python/training/detection/ssd/vision/ssd/ssd.py", line 149, in load
self.load_state_dict(torch.load(model, map_location=lambda storage, loc: storage))
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SSD:
size mismatch for classification_headers.0.weight: copying a param with shape torch.Size([48, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([42, 512, 3, 3]).
size mismatch for classification_headers.0.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([42]).
size mismatch for classification_headers.1.weight: copying a param with shape torch.Size([48, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([42, 1024, 3, 3]).
size mismatch for classification_headers.1.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([42]).
size mismatch for classification_headers.2.weight: copying a param with shape torch.Size([48, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([42, 512, 3, 3]).
size mismatch for classification_headers.2.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([42]).
size mismatch for classification_headers.3.weight: copying a param with shape torch.Size([48, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([42, 256, 3, 3]).
size mismatch for classification_headers.3.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([42]).
size mismatch for classification_headers.4.weight: copying a param with shape torch.Size([48, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([42, 256, 3, 3]).
size mismatch for classification_headers.4.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([42]).
size mismatch for classification_headers.5.weight: copying a param with shape torch.Size([48, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([42, 256, 3, 3]).
size mismatch for classification_headers.5.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([42]).
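
For context on the shapes in the error: if I read the mb1-ssd heads correctly, each classification header outputs 6 priors per location × num_classes channels, so the checkpoint was trained for 48 / 6 = 8 classes while the network eval_ssd.py builds from labels.txt only has 42 / 6 = 7. Below is a minimal sketch to compare the two counts, assuming the paths from the command above and that the .pth file holds a plain state_dict (which the traceback suggests):

import torch

# Paths copied from the command above -- adjust if yours differ.
CHECKPOINT = "models/street/mb1-ssd-Epoch-2-Loss-3.924812481620095.pth"
LABEL_FILE = "models/street/labels.txt"

# Assumption: mb1-ssd classification headers predict 6 prior boxes per
# location, so out_channels = 6 * num_classes.
PRIORS_PER_LOCATION = 6

state_dict = torch.load(CHECKPOINT, map_location="cpu")
out_channels = state_dict["classification_headers.0.weight"].shape[0]
print("classes in checkpoint:", out_channels // PRIORS_PER_LOCATION)  # 48 // 6 = 8

with open(LABEL_FILE) as f:
    labels = [line.strip() for line in f if line.strip()]
print("entries in labels.txt:", len(labels), labels)

If the two numbers disagree, the labels.txt passed to eval_ssd.py does not match the one used when the checkpoint was trained (for example a class was added or removed, or the BACKGROUND entry is missing), so the classification heads are built with the wrong size and loading fails exactly like this.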

Sorry for the late response.
Is this still an issue that needs support? Are there any results you can share?