Issue When Running Re-trained SSD Mobilenet Model in Script

Hello Nvidia,

I followed the ‘Re-training SSD-Mobilenet’ tutorial from dusty-nv's jetson-inference repository on GitHub (Jetson Nano, JetPack 4.4), and I have run into a significant issue.

After successfully retraining my model on 5 classes and exporting it to an ONNX file, running it from the terminal with the following command works without a problem; it even reaches up to 40 FPS at times.

```
detectnet --model=models/adas/ssd-mobilenet.onnx --labels=models/adas/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes
```

However, I then tried to follow dusty-nv's advice from the forum thread ONNX model with Jetson-Inference using GPU - #4 by Pelepicier on loading the model from a Python script, whereby I could run my model simply by adding the following line to my script.

```
net = jetson.inference.detectNet(argv=['--model=models/adas/ssd-mobilenet.onnx',
                                       '--labels=models/adas/labels.txt',
                                       '--input-blob=input_0',
                                       '--output-cvg=scores',
                                       '--output-bbox=boxes'])
```
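Since the same paths have to be spelled identically on the command line and in the script, one way to keep them from drifting apart is to build the argv list in a single place. A minimal sketch (the `models/adas` layout is taken from the post above; `detectnet_argv` is a hypothetical helper name, and the detectNet call itself only runs on a Jetson):

```python
# Build the detectNet argv in one place so the paths used in the script
# cannot drift from the ones that already work on the command line.
def detectnet_argv(model_dir):
    # model_dir is assumed to contain ssd-mobilenet.onnx and labels.txt,
    # matching the layout shown in the original post.
    return [
        f'--model={model_dir}/ssd-mobilenet.onnx',
        f'--labels={model_dir}/labels.txt',
        '--input-blob=input_0',
        '--output-cvg=scores',
        '--output-bbox=boxes',
    ]

# On the Jetson itself you would then construct the network with:
#   import jetson.inference
#   net = jetson.inference.detectNet(argv=detectnet_argv('models/adas'))
```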

Unfortunately, I received the following error and have no idea how to debug it. I have no issues running the other detectnet models, only this one. It has also come to my attention that this error occurs frequently when running a retrained model on the Jetson Nano; while I am not facing other issues at the moment, it has happened in the past. I would sincerely appreciate any help, and I hope you can advise me on how to solve this issue. Thanks in advance.

Output when running script in Terminal (via Code-OSS):

```
jetson.inference -- detectNet loading network using argv command line params

detectNet -- loading detection network model from:
-- prototxt NULL
-- model models/adas/sss-mobilenet.onnx
-- input_blob 'data'
-- output_cvg 'scores'
-- output_bbox 'boxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels models/adas/labels.txt
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .1.1.7103.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU

error: model file 'models/adas/sss-mobilenet.onnx' was not found.
if loading a built-in model, maybe it wasn't downloaded before.

    Run the Model Downloader tool again and select it for download:

       $ cd <jetson-inference>/tools
       $ ./download-models.sh

[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
File "/usr/local/bin/detectnet.py", line 51, in
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network
```

Hi,

The log indicates that the ONNX file doesn't exist. Note that the log shows the path 'models/adas/sss-mobilenet.onnx' (with three s's) rather than 'ssd-mobilenet.onnx' as in your command, so please double-check the spelling of the model path in your script, and that the model is correctly placed relative to the directory you launch the script from.

Thanks.
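One way to double-check this is to verify the files exist before handing the paths to detectNet, so a misspelled or misplaced model fails immediately with a clear message. A minimal sketch using only the Python standard library (`resolve_model_paths` is a hypothetical helper name, and the detectNet call itself only runs on a Jetson):

```python
from pathlib import Path

def resolve_model_paths(*relative_paths):
    """Resolve each path against the current working directory and fail
    fast with a clear message if any file is missing or misspelled."""
    resolved = []
    for rel in relative_paths:
        p = Path(rel).expanduser().resolve()
        if not p.is_file():
            raise FileNotFoundError(
                f"'{rel}' not found at {p} -- check the spelling and the "
                f"directory the script is launched from")
        resolved.append(str(p))
    return resolved

# On the Jetson, pass the verified absolute paths on to detectNet:
#   import jetson.inference
#   model, labels = resolve_model_paths('models/adas/ssd-mobilenet.onnx',
#                                       'models/adas/labels.txt')
#   net = jetson.inference.detectNet(argv=[f'--model={model}',
#                                          f'--labels={labels}',
#                                          '--input-blob=input_0',
#                                          '--output-cvg=scores',
#                                          '--output-bbox=boxes'])
```

Using absolute paths also avoids a related pitfall: relative paths like `models/adas/...` only work if the script is launched from the directory that contains `models/`.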