ONNX Model

Hello,
I used a ResNet50 on Kubeflow to train my model and got good results for two classes (True, False).
Then I converted the model to ONNX format.
Can you test my model, please?
model.onnx (97.7 MB)

I use this command to test the model on live video, but I only ever get one class, for example 100% True or 100% False:
$ imagenet --model=models/Yazan/model.onnx --input_blob=tensor --output_blob=ret.15 --labels=data/Yazan/labels.txt csi://0

What could the problem be? Should I perhaps resize the images to 224x224, and if so, how can I do that for live video when I run the model?
Can you please help me?
Thanks very much.

Hi,

Could you first verify that the ONNX conversion itself works correctly?
This can be done by running inference with ONNX Runtime.
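A minimal ONNX Runtime check might look like the sketch below. It assumes the input tensor is named "tensor" (as in your imagenet command) with shape 1x3x224x224 in NCHW float32; adjust to whatever the session actually reports for your model.

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores to probabilities."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def run_check(model_path, image_nchw):
    """Run one pre-processed image through the ONNX model and return class probabilities."""
    import onnxruntime as ort
    session = ort.InferenceSession(model_path)
    inp = session.get_inputs()[0]
    print("model expects input:", inp.name, inp.shape)  # verify name/shape first
    logits = session.run(None, {inp.name: image_nchw})[0].squeeze()
    return softmax(logits)

# Hypothetical usage (paths and shapes are assumptions, not verified against your model):
# probs = run_check("models/Yazan/model.onnx",
#                   np.random.rand(1, 3, 224, 224).astype(np.float32))
# print("class:", probs.argmax(), "confidence:", probs.max())
```

If this also reports 100% for a single class regardless of input, the problem is in the export or pre-processing rather than in jetson-inference.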

Thanks.

Hi @AastaLLL,
We already ran model inference in Kubeflow and it worked there:
out of 128 images, 122 were correctly classified and the rest were misclassified.
I would appreciate a resource or an example showing how to do it with ONNX Runtime.

Thanks.

Hi @yazan.doha, the ResNet classification ONNX models used in jetson-inference were trained with PyTorch in this part of the Hello AI World tutorial: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect.md

As such, the pre-processing coefficients used by imageNet for mean-pixel subtraction and pixel standardization/normalization are the same coefficients used by PyTorch: https://github.com/dusty-nv/jetson-inference/blob/b72ff405cd908141a83993549c30addb23b48c52/c/imageNet.cpp#L265

So you may need to change those pre-processing coefficients to match what Kubeflow uses, if they differ. jetson-inference automatically downsamples the input stream to the resolution and channel layout that the model expects (e.g. 224x224 NCHW), so you do not need to resize the live video yourself.
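For reference, the PyTorch-style pre-processing that imageNet applies can be sketched in NumPy as below. The mean/std values are the standard torchvision ImageNet coefficients; verify them against the imageNet.cpp link above, and compare them with whatever your Kubeflow training pipeline used.

```python
import numpy as np

# Standard PyTorch/torchvision ImageNet coefficients (per RGB channel).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD  = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8):
    """HWC uint8 RGB image (already resized to 224x224) -> 1x3x224x224 float32 NCHW."""
    x = image_hwc_uint8.astype(np.float32) / 255.0   # scale pixels to [0, 1]
    x = (x - MEAN) / STD                             # per-channel mean subtraction / standardization
    x = x.transpose(2, 0, 1)[np.newaxis]             # HWC -> CHW, add batch dimension
    return x
```

If your training pipeline normalized images differently (for example, no division by 255, or different means), feeding the same image through both versions will produce very different tensors, which can easily push a 2-class model to a constant 100% prediction.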
