Custom trained model detectNet jetson_inference

Hello, I have trained a custom object detection model for my Jetson Nano 2GB using this guide: jetson-inference/pytorch-ssd.md at master · dusty-nv/jetson-inference · GitHub

The problem is that I cannot load the custom trained model in my Python script. Here is what I tried:

import jetson.inference
import jetson.utils
import numpy as np
import cv2
import time

timeStamp = time.time()
fpsFilt = 0

net = jetson.inference.detectNet('ssd', ['--model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes'])

I get the following error in terminal:

      [TRT]    model format 'custom' not supported by jetson-inference
      [TRT]    detectNet -- failed to initialize.
      jetson.inference -- detectNet failed to load network

The error leads me to believe that custom trained models cannot be loaded this way. Is there a way to fix this problem so that I can load my custom trained model in my script, or is there a workaround that people in the community have been using? Thanks for reading; any kind of help is welcome.


Hi,

The format type in that message refers to the file format, not the model architecture.
Based on your use case, the format should be ‘onnx’, which is within the supported range.
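For illustration, the loader infers the format from the filename extension, roughly like the sketch below. This is a simplified Python stand-in, not the actual library code (the real check lives on the C++ side of jetson-inference):

import os

# Simplified illustration: jetson-inference picks the model format
# from the file extension, falling back to 'custom' for anything unknown.
def guess_model_format(path):
    ext = os.path.splitext(path)[1].lower()
    formats = {'.onnx': 'onnx', '.uff': 'uff', '.caffemodel': 'caffe'}
    return formats.get(ext, 'custom')

print(guess_model_format('models/fruit/ssd-mobilenet.onnx'))  # 'onnx'
print(guess_model_format('models/fruit/labels.txt'))          # 'custom'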

To investigate further, could you check the model_fmt value in your environment?

Thanks.

Hi @nitinkumar96, try it like this instead:

 net = jetson.inference.detectNet(argv=['--model=models/fruit/ssd-mobilenet.onnx', '--labels=models/fruit/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'])
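If it helps, here is a minimal end-to-end sketch built around that constructor. videoSource/videoOutput are the standard jetson.utils streaming classes; the camera device path is just an assumption for illustration:

import jetson.inference
import jetson.utils

# argv must be a list of separate strings, one per flag
net = jetson.inference.detectNet(argv=['--model=models/fruit/ssd-mobilenet.onnx',
                                       '--labels=models/fruit/labels.txt',
                                       '--input-blob=input_0',
                                       '--output-cvg=scores',
                                       '--output-bbox=boxes'])

camera = jetson.utils.videoSource("/dev/video0")   # assumed camera device
display = jetson.utils.videoOutput("display://0")  # attached display

while display.IsStreaming():
    img = camera.Capture()        # grab the next frame
    detections = net.Detect(img)  # run detection; overlays are drawn on img
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))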

I have a directory with my Python code and a shell script. It also contains the directory Cones-and-Cells, which holds my labels.txt and ssd*.onnx files.

The shell script works fine. It contains:

detectnet /dev/video3 --model=Cones-and-Cells/ssd-mobilenet.onnx --labels=Cones-and-Cells/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes

My Python code includes (per your suggestion):

net = jetson.inference.detectNet(argv=['--model=Cones-and-Cells/ssd-mobilenet.onnx --input-blob=input_0 --output-cvg=scores --output-bbox=boxes --labels=Cones-and-Cells/labels.txt'])

However, that line fails with:

[TRT]    detected model format - custom  (extension '.txt')
[TRT]    model format 'custom' not supported by jetson-inference
[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network

This makes no sense to me. Why is it trying to infer the model format from the label file? What am I doing wrong?

Thanks in advance for any enlightenment!

Michael

Oddly, when I change the order of parameters to this:

net = jetson.inference.detectNet(argv=['--model=Cones-and-Cells/ssd-mobilenet.onnx --labels=Cones-and-Cells/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes'])

The message changes to this:

[TRT] detected model format - custom (extension '.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes')

It’s like the parser didn’t recognize any of the spaces in the string.
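A quick check in plain Python confirms the suspicion: the whole thing is one list element, so the C side receives a single giant argument and never splits it on spaces (the paths are just mine from above):

# One string == one argv entry; nothing ever splits it on spaces.
argv_one_string = ['--model=Cones-and-Cells/ssd-mobilenet.onnx --labels=Cones-and-Cells/labels.txt']
argv_separate = ['--model=Cones-and-Cells/ssd-mobilenet.onnx', '--labels=Cones-and-Cells/labels.txt']

print(len(argv_one_string))  # 1 -- everything after '--model=' is treated as the path
print(len(argv_separate))    # 2 -- each flag arrives as its own argument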

Found it! @dusty_nv: your earlier reply was a little off. You were missing the quotes to make argv an array of strings.

net = jetson.inference.detectNet(argv=['--model=models/fruit/ssd-mobilenet.onnx', '--labels=models/fruit/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'])
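For anyone else following along, the same pattern applied to my Cones-and-Cells model (matching the working shell script earlier in the thread) would be:

net = jetson.inference.detectNet(argv=['--model=Cones-and-Cells/ssd-mobilenet.onnx',
                                       '--labels=Cones-and-Cells/labels.txt',
                                       '--input-blob=input_0',
                                       '--output-cvg=scores',
                                       '--output-bbox=boxes'])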


Aha, sorry - right. Glad you got it working. I've corrected that in the post above.