Running custom ONNX model on Jetson

Hello. I trained my custom model for detecting traffic signs on Ultralytics HUB and downloaded the ONNX file. Then, from the jetson-inference/python/training/detection/ssd folder, I ran the command:
detectnet --model=models/traffic/traffic_signs_recognition.onnx --labels=models/traffic/labels.txt
and got this error:
3: Cannot find binding of given name: data
failed to find requested input layer data in network
device gpu, failed to create resources for CUDA engine
failed to create TensorRT engine for models/traffic/traffic_signs_recognition.onnx, device GPU
detectnet: failed to load detectNet model

How can I solve this error?

Hi @ilgar, you also need to specify the input and output layer names as shown here:

detectnet expects to have one input layer and two output layers, and it may require modification to the pre/post-processing to support your custom ONNX model if it’s of a different architecture:

The ONNX support in detectNet is setup for SSD-Mobilenet (trained from PyTorch) since that is what my tutorial uses.
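For reference, the SSD-Mobilenet ONNX models exported by the jetson-inference training tutorial use the layer names input_0, scores, and boxes, so a typical invocation looks like this (the model path, label path, and camera device here are placeholders, not the poster's actual files):

```shell
# Run detectnet on an SSD-Mobilenet ONNX model exported by jetson-inference's
# train_ssd.py / onnx_export.py scripts. Layer names below are the ones those
# scripts produce; substitute your own model/label paths and input source.
detectnet --model=models/traffic/ssd-mobilenet.onnx \
          --labels=models/traffic/labels.txt \
          --input-blob=input_0 \
          --output-cvg=scores \
          --output-bbox=boxes \
          /dev/video0
```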

Thank you for the reply! Now I am trying to run

detectnet --model=models/traffic/traffic_signs_recognition.onnx --labels=models/traffic/labels.txt --input-blob=images --output-cvg=output --output-bbox=output

but I don't get any boxes on the image.
I fear that my ONNX model has only one output layer; please check my image on

Yes, it appears to have only one output layer (1x25200x9). If you know how the data in that output layer is interpreted and what its dimensions correspond to, you could modify the detectNet code to use it. It should have pre- and post-processing similar to what is shown for the model in the netron app.
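For models from the Ultralytics YOLO family, a 1x25200x9 output typically means 25200 candidate boxes with 9 values each: box center x/y, width, height, objectness, and then one score per class (here that would be 4 classes). Assuming that layout (verify it in netron before relying on it), the post-processing detectNet would need looks roughly like this sketch; the function name and thresholds are mine:

```python
import numpy as np

def decode_yolo_output(pred, conf_thres=0.25):
    """Decode a (25200, 9) YOLO-style prediction tensor into detections.

    Assumed row layout: (cx, cy, w, h, objectness, class_score_0..3).
    Returns a list of (x1, y1, x2, y2, confidence, class_id) tuples.
    """
    detections = []
    for row in pred:
        cx, cy, w, h, obj = row[:5]
        cls_scores = row[5:]
        cls_id = int(np.argmax(cls_scores))
        conf = float(obj * cls_scores[cls_id])  # combined confidence
        if conf < conf_thres:
            continue
        # Convert center/size to corner coordinates.
        x1, y1 = cx - w / 2, cy - h / 2
        x2, y2 = cx + w / 2, cy + h / 2
        detections.append((float(x1), float(y1), float(x2), float(y2), conf, cls_id))
    return detections
```

Note that the surviving boxes still need non-maximum suppression before drawing, since YOLO emits many overlapping candidates per object.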

Otherwise, if you don’t wish to make these modifications but still wish to use jetson-inference, I recommend converting your dataset to Pascal VOC format and using the PyTorch scripts included with jetson-inference to train an SSD-Mobilenet ONNX model that is already supported by the detectNet code.
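For reference, Pascal VOC annotations are one XML file per image, with a bounding box per object in pixel coordinates. A minimal sketch of generating one (the helper name is mine; the XML field names follow the VOC convention):

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, width, height, objects):
    """Build a minimal Pascal VOC annotation as an XML string.

    objects: list of (class_name, xmin, ymin, xmax, ymax) in pixels.
    """
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    for tag, val in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(val)
    for name, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = name
        ET.SubElement(obj, "difficult").text = "0"
        box = ET.SubElement(obj, "bndbox")
        for tag, val in (("xmin", xmin), ("ymin", ymin),
                         ("xmax", xmax), ("ymax", ymax)):
            ET.SubElement(box, tag).text = str(val)
    return ET.tostring(ann, encoding="unicode")
```

Each XML file goes in an Annotations/ directory alongside the JPEGImages/ directory, with ImageSets/Main/ text files listing the image IDs for each split.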
