The problem is that I cannot load my custom-trained model in my Python script. Here is what I tried:
import jetson.inference
import jetson.utils
import numpy as np
import cv2
import time
timeStamp = time.time()
fpsFilt = 0
net = jetson.inference.detectNet('ssd', ['--model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes'])
I get the following error in the terminal:
[TRT] model format 'custom' not supported by jetson-inference
[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
The error leads me to believe that custom-trained models cannot be loaded this way. Is there a way to fix this so that I can load my custom-trained model in my script, or is there a workaround that people in the community have been using? Thanks for reading; any kind of help is welcome.
The format type indicates the file format rather than the model architecture.
In your use case the format should be 'onnx', which is within the supported range.
To give a further suggestion, could you check the model_fmt value in your environment?
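For reference, detectNet expects each command-line flag to be a separate element of the argv list. When all of the flags are packed into a single string, the loader treats that whole string as the --model path and falls back to guessing the format from whatever extension it finds, which would explain the 'custom' format in your log. A minimal sketch using the paths from your post, with one flag per list element:

import jetson.inference

net = jetson.inference.detectNet(argv=[
    '--model=models/fruit/ssd-mobilenet.onnx',  # one flag per list element
    '--labels=models/fruit/labels.txt',
    '--input-blob=input_0',
    '--output-cvg=scores',
    '--output-bbox=boxes'])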
I have a directory with my Python code and a shell script. It also contains a directory, Cones-and-Cells, which holds my labels.txt and ssd*.onnx files. I load the network with:
net = jetson.inference.detectNet(argv=['--model=Cones-and-Cells/ssd-mobilenet.onnx --input-blob=input_0 --output-cvg=scores --output-bbox=boxes --labels=Cones-and-Cells/labels.txt'])
However, that line fails with:
[TRT] detected model format - custom (extension '.txt')
[TRT] model format 'custom' not supported by jetson-inference
[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
This makes no sense to me. Why is it trying to infer the model format from the label file? What am I doing wrong?
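One clue is in the log itself: "extension '.txt'" suggests the parser took the entire single-element argv string as the --model path and found the labels file's extension at the end of it. If that is what is happening here, splitting the flags into separate list elements should let the loader see the '.onnx' extension instead. A sketch with the same Cones-and-Cells layout as above:

import jetson.inference

net = jetson.inference.detectNet(argv=[
    '--model=Cones-and-Cells/ssd-mobilenet.onnx',  # extension '.onnx' is now detectable
    '--labels=Cones-and-Cells/labels.txt',
    '--input-blob=input_0',
    '--output-cvg=scores',
    '--output-bbox=boxes'])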