Hello all,
I have trained a custom model using the scripts from https://github.com/dusty-nv/jetson-inference on a cloud service, following this remark from Dusty. I then converted my .pth model to a .onnx model and copied it to the Jetson Nano.
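(For the conversion I used the onnx_export.py script that comes with the repo, roughly like this; models/ob is just my model directory:)
$ python3 onnx_export.py --model-dir=models/ob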
However, when I launch the command below from the folder /jetson-inference/python/training/detection/ssd,
$ NET=models/ob
$ DATASET=data/ob
$ imagenet.py --model=$NET/ssd-mobilenet.onnx --input_blob=input_0 --output_blob=output_0 --labels=$DATASET/labels.txt csi://0
I get the following error:
[TRT] 3: Cannot find binding of given name: output_0
[TRT] failed to find requested output layer output_0 in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to create TensorRT engine for models/ob/ssd-mobilenet.onnx, device GPU
[TRT] failed to load models/ob/ssd-mobilenet.onnx
[TRT] imageNet -- failed to initialize.
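In case it helps with debugging, I believe the actual binding names inside the .onnx file can be listed with a short script like this (just a sketch using the onnx Python package, which may need to be installed separately):

import onnx

# Load the exported model and print the graph's input/output names,
# i.e. the binding names that TensorRT will look for.
model = onnx.load("models/ob/ssd-mobilenet.onnx")
for inp in model.graph.input:
    print("input:", inp.name)
for out in model.graph.output:
    print("output:", out.name)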
Any idea how I can resolve this? Could it be related to the fact that I trained the model on a cloud service, which perhaps does not have a 'discrete' GPU as mentioned in the above post from Dusty?
Thank you for your consideration!