Hello, I have trained a custom object detection model for my Jetson Nano 2GB using this guide: jetson-inference/pytorch-ssd.md at master · dusty-nv/jetson-inference · GitHub
The problem is that I cannot load the custom-trained model in my Python script. Here is what I tried:
```python
import jetson.inference
import jetson.utils
import numpy as np
import cv2
import time

timeStamp = time.time()
fpsFilt = 0
net = jetson.inference.detectNet('ssd',
    ['--model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt '
     '--input-blob=input_0 --output-cvg=scores --output-bbox=boxes'])
```
I get the following error in terminal:
```
[TRT]  model format 'custom' not supported by jetson-inference
[TRT]  detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
```
The error leads me to believe that custom-trained models cannot be loaded this way. Is there a way to fix this so that I can load my custom-trained model in my script, or is there a workaround that people in the community have been using? Thanks for reading; any kind of help is welcome.
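In case it clarifies what I am attempting: my understanding (which may be wrong, and is the thing I would like confirmed) is that each flag should be a separate string in the argv list rather than one combined string. The helper function below is just my own illustration; only the flag names and my model paths come from the guide:

```python
# Sketch of how I believe detectNet expects custom-model flags to be passed:
# each flag as its own element of the argv list, not one space-joined string.
# build_detectnet_argv is a hypothetical helper of mine, not part of jetson-inference.

def build_detectnet_argv(model, labels):
    """Return the flag list for loading a custom SSD-Mobilenet ONNX model."""
    return [
        '--model=' + model,            # path to the exported ONNX model
        '--labels=' + labels,          # class labels file from training
        '--input-blob=input_0',        # layer names used by the pytorch-ssd export
        '--output-cvg=scores',
        '--output-bbox=boxes',
    ]

argv = build_detectnet_argv('models/fruit/ssd-mobilenet.onnx',
                            'models/fruit/labels.txt')

# On the Jetson itself I would then expect to call (untested on my side):
#   import jetson.inference
#   net = jetson.inference.detectNet(argv=argv)
```

If someone can confirm whether this argv form (or a different constructor signature) is the supported way to load a custom ONNX model, that would answer my question.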