Hi!
I am looking for an example, preferably in Python, of deploying a custom ResNet model from TensorFlow to the Jetson Xavier. My model has already been converted to the .onnx format for the Xavier. I have seen the Jetson Inference API example in Python that loads a model using the imageNet() class: https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/my-recognition.py
Aside from having the .onnx file what else is required to load my custom ResNet model on the Xavier?
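To be concrete, this is roughly the script I am trying to get working. It is a sketch adapted from the my-recognition.py example, assuming the jetson.inference imageNet class accepts the same --model/--labels/--input_blob/--output_blob arguments via argv that imagenet-console takes (the model, label, and image file names below are my own files, not part of the library):

```python
import jetson.inference
import jetson.utils

# Load the custom ONNX classifier; argv mirrors the imagenet-console flags.
# (Model and label file names are placeholders for my own files.)
net = jetson.inference.imageNet(argv=[
    "--model=k2onnx_exp52_resnet50.onnx",
    "--labels=defect_labels.txt",
    "--input_blob=input_0",
    "--output_blob=output_0",
])

# Load the test image into GPU memory and classify it.
img = jetson.utils.loadImage("S9370.jpg_defect_0.png")
class_idx, confidence = net.Classify(img)

# Print the predicted class description and confidence.
print(net.GetClassDesc(class_idx), confidence)
```

Is this the right way to pass a custom model to imageNet() from Python, or is something else required?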
I also tried loading my custom model using the imagenet-console binary, but got an error.
This was the command I used: ./imagenet-console --model=k2onnx_exp52_resnet50.onnx --input_blob=input_0 --output_blob=output_0 --labels=defect_labels.txt S9370.jpg_defect_0.png
Screenshots:
Console command: https://imgur.com/1BaiZPj
Error: https://imgur.com/dn3Zahv
Is there any example code like the one linked above (Python Jetson Inference), but using a custom ResNet model?
I have searched the forums and the web without any luck for this specific example of using a custom model with the Jetson Inference API.
Thanks for any help! :)