Hello,
I used a ResNet50 on Kubeflow to train my model, and I got good results for two classes (True, False).
Then I converted my model to the ONNX format.
Can you please test my model? model.onnx (97.7 MB)
I use this command to test the model on live video, but I only ever get one class, e.g. 100% True or 100% False:
$ imagenet --model=models/Yazan/model.onnx --input_blob=tensor --output_blob=ret.15 --labels=data/Yazan/labels.txt csi://0
What might the problem be? Should I perhaps resize the images to 224x224, and if so, how can I do that for live video when running the model?
Can you please help me?
Thanks very much.
Hi @AastaLLL,
We already ran model inference in Kubeflow, and it worked there.
Out of 128 images, 122 were classified correctly and the rest were misclassified.
I would appreciate it if you could point me to a resource or an example of how to do it.
So you might need to alter those pre-processing coefficients to mirror what Kubeflow uses, if they differ. jetson-inference will automatically downsample the input stream to the resolution and channel layout that the model expects (e.g. 224x224 NCHW).
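For reference, here's a minimal sketch of the kind of per-channel normalization a ResNet50 trained on an ImageNet-style pipeline commonly applies before inference. The mean/std values below are the standard ImageNet ones and are an assumption here; check them against what your Kubeflow training pipeline actually used, since a mismatch can produce exactly this "always 100% one class" behavior:

```python
import numpy as np

# Assumed ImageNet mean/std -- verify these match your Kubeflow training config
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(frame_hwc_uint8):
    """Convert a (224, 224, 3) RGB uint8 frame to a (1, 3, 224, 224) float32 tensor."""
    x = frame_hwc_uint8.astype(np.float32) / 255.0   # scale [0, 255] -> [0, 1]
    x = (x - MEAN) / STD                             # per-channel normalization
    x = x.transpose(2, 0, 1)[np.newaxis, ...]        # HWC -> NCHW, add batch dim
    return x

# Example on a dummy black frame:
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
tensor = preprocess(dummy)
print(tensor.shape)  # (1, 3, 224, 224)
```

If the coefficients baked into the imagenet tool differ from these, the model sees inputs on a different scale than it was trained on, which can saturate the output toward a single class.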