On Jetson Nano, using my own MobileNetV2 model

I retrained a MobileNetV2 model and ran it with the detectnet command:

$ detectnet --model=model/mb2-ssd-lite.onnx --labels=models/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes csi://0

It gives a good result. Then I want to use the model in my own code.
I use this code to load it:

import torch
from vision.ssd.mobilenet_v2_ssd_lite import create_mobilenetv2_ssd_lite

model_trt_detection = create_mobilenetv2_ssd_lite(num_classes=7)
model_trt_detection.load_state_dict(torch.load('mb2_ssd_lite.pth'))
This is my code. When I print the inference result, it has a large confidence for category zero, which is the background class.
This is my print output:

model_trt_classification0: torch.Size([1, 1602, 7])
model_trt_classification: tensor([11.8281, -1.1484, -1.3271, -1.6455, -3.1895, -2.2070, -1.9297],
       device='cuda:0', dtype=torch.float16, grad_fn=<...>)
model_trt_classification1: torch.Size([1, 1602, 4])
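
Incidentally, those scores look like raw logits rather than probabilities. As a minimal sketch, here is how the checkpoint could be loaded through the predictor helper of the pytorch-ssd project that the jetson-inference training tutorial uses; create_mobilenetv2_ssd_lite_predictor, net.load, and the is_test flag are that project's API and may differ in other checkouts:

from vision.ssd.mobilenet_v2_ssd_lite import (
    create_mobilenetv2_ssd_lite,
    create_mobilenetv2_ssd_lite_predictor,
)

# is_test=True makes the network apply softmax to the class scores and
# decode the raw location offsets into box coordinates.
net = create_mobilenetv2_ssd_lite(num_classes=7, is_test=True)
net.load('mb2_ssd_lite.pth')  # thin wrapper around load_state_dict(torch.load(...))
predictor = create_mobilenetv2_ssd_lite_predictor(net, candidate_size=200)

# image: an RGB numpy array (H, W, 3); predict() resizes, normalizes,
# runs the net, and applies non-maximum suppression internally.
boxes, labels, probs = predictor.predict(image, 10, 0.4)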
3_roadfollowing_classification_behavior.ipynb (404.3 KB)

Hi,

Except for the confidence value, do you get the expected bounding box output?
Thanks.

I have a question: why is the final output [[1, 1602, 7], [1, 1602, 4]]? I know 1 is the batch size, 7 is the number of classes I predict, and 4 is the bounding-box output. But where does the 1602 come from?
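
For what it's worth, in a standard SSD head the middle dimension indexes the prior (anchor) boxes generated over the feature maps, so each of the 1602 rows is one candidate detection. A minimal sketch of turning the raw class logits into per-box probabilities, assuming scores is the [1, 1602, 7] tensor printed above:

import torch.nn.functional as F

probs = F.softmax(scores, dim=-1)          # [1, 1602, 7] per-box class probabilities
best_prob, best_class = probs.max(dim=-1)  # [1, 1602] top class for each candidate box
# rows whose top class is 0 are background and are normally discarded

The matching [1, 1602, 4] tensor holds box regressions relative to those priors, so it still has to be decoded against the prior boxes (the is_test=True path in pytorch-ssd does this) before the coordinates are meaningful.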

Hi,

Could you run your model with trtexec first to check the output dimensions reported by TensorRT?
For example:

$ /usr/src/tensorrt/bin/trtexec --onnx=[onnx] --dumpOutput
$ /usr/src/tensorrt/bin/trtexec --onnx=yolov3-tiny-416-bs1.onnx --dumpOutput
...
[09/17/2021-14:32:46] [I] Output Tensors:
[09/17/2021-14:32:46] [I] 023_convolutional: (1x255x26x26)
...
[09/17/2021-14:32:46] [I] 016_convolutional: (1x255x13x13)
...
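
For the model in this thread, that would be something like (using the ONNX path from the detectnet command above):

$ /usr/src/tensorrt/bin/trtexec --onnx=model/mb2-ssd-lite.onnx --dumpOutput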

Thanks.