Hello. I trained my custom model for detecting traffic signs on Ultralytics HUB and downloaded the ONNX file. Then I ran the command:
detectnet --model=models/traffic/traffic_signs_recognition.onnx --labels=models/traffic/labels.txt
from the jetson-inference/python/training/detection/ssd folder. I got an error:
3: Cannot find binding of given name: data
failed to find requested input layer data in network
device gpu, failed to create resources for CUDA engine
failed to create TensorRT engine for models/traffic/traffic_signs_recognition.onnx, device GPU
detectnet: failed to load detectNet model
detectNet expects to have one input layer and two output layers, and it may require modification to the pre/post-processing to support your custom ONNX model if it's of a different architecture:
Yes, it appears to only have one output layer (1x25200x9). If you know how the data of that output layer is interpreted and what its dimensions correspond to, you could modify the detectNet code to use it. You can inspect the model's layers and tensor shapes in the Netron app to confirm how the output is laid out.
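For what it's worth, a 1x25200x9 tensor matches the common YOLOv5-style export layout, where each of the 25200 rows is [cx, cy, w, h, objectness, per-class scores] (so 9 = 5 + 4 classes here). That layout is an assumption — verify it in Netron first — but under it, the post-processing you'd need to add to detectNet looks roughly like this NumPy sketch:

```python
import numpy as np

def decode_yolo_output(pred, conf_thresh=0.5):
    """Decode a (25200, 9) YOLO-style prediction array.

    Assumes each row is [cx, cy, w, h, objectness, c0..c3]
    (an assumption -- confirm the layout in Netron).
    Returns a list of (x1, y1, x2, y2, class_id, score).
    """
    detections = []
    for row in pred:
        cx, cy, w, h, obj = row[:5]
        class_scores = row[5:]
        cls = int(np.argmax(class_scores))
        score = obj * class_scores[cls]   # combined confidence
        if score < conf_thresh:
            continue
        # convert center/size to corner coordinates
        detections.append((cx - w / 2, cy - h / 2,
                           cx + w / 2, cy + h / 2, cls, score))
    return detections

# Synthetic example: one confident box among otherwise-empty rows.
pred = np.zeros((25200, 9), dtype=np.float32)
pred[0] = [320, 240, 100, 80, 0.9, 0.05, 0.92, 0.02, 0.01]
print(decode_yolo_output(pred))
```

A real port into detectNet would also apply non-maximum suppression across overlapping boxes, and the coordinates may need scaling back to the original image size depending on the model's input resolution.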
Otherwise, if you don't want to make these modifications but still wish to use jetson-inference, I recommend converting your dataset to Pascal VOC format and using the PyTorch scripts included with jetson-inference to train an SSD-Mobilenet ONNX model, which is already supported by the detectNet code.
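For reference, Pascal VOC annotations are one XML file per image, listing the image size and a bounding box per object. A minimal sketch of the format and how to read it back (the filename and class name here are illustrative, not from your dataset):

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC annotation (filename/class name are illustrative).
voc_xml = """<annotation>
  <filename>sign_0001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>stop_sign</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>260</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>"""

root = ET.fromstring(voc_xml)
for obj in root.iter("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    coords = [int(box.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax")]
    print(name, coords)
```

Once your annotations are in this shape (alongside the usual Annotations/, JPEGImages/, and ImageSets/Main/ directories), the jetson-inference train_ssd.py script can consume them with its VOC dataset type and onnx_export.py produces an ONNX model detectNet loads directly.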