[TRT] imageNet -- failed to initialize

Hello all,

I have trained a custom model using the scripts from https://github.com/dusty-nv/jetson-inference on a cloud service, following this remark of Dusty's. I then converted my .pth model to an .onnx model and copied it to the Jetson Nano.

However, when I run the following commands in the folder /jetson-inference/python/training/detection/ssd,

$ NET=models/ob
$ DATASET=data/ob
$ imagenet.py --model=$NET/ssd-mobilenet.onnx --input_blob=input_0 --output_blob=output_0 --labels=$DATASET/labels.txt csi://0

I get the following error:

[TRT] 3: Cannot find binding of given name: output_0
[TRT] failed to find requested output layer output_0 in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to create TensorRT engine for models/ob/ssd-mobilenet.onnx, device GPU
[TRT] failed to load models/ob/ssd-mobilenet.onnx
[TRT] imageNet -- failed to initialize.

Any idea how I can resolve this? Is it related to the fact that I trained the model on a cloud service, which perhaps does not have a 'discrete' GPU, as in the above-mentioned post of Dusty?

Thank you for your consideration!

Never mind!! I should have run detectnet and not imagenet! Now it works fine!!

Hi @SuShi163, the command you are running is for an image classification model, but you are loading an object detection model (SSD-Mobilenet). Do you want to do classification or detection? If the latter, try this command instead:

detectnet.py --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes csi://0

EDIT: just saw your comment above - glad you got it working! :)

Yes, this is exactly what I had to do! Thank you @dusty_nv for getting back and also for jetson-inference. Your scripts work like a charm!!