Hello, everyone. I compared inference time on a Jetson Xavier and my local host PC. Inferring a single 28×28-pixel grayscale MNIST image takes only 0.08 ms on my local host PC, while it takes 0.6 ms on the Jetson Xavier. I am confused by these results.
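For context, here is a minimal sketch of how a single-image latency like this can be measured (the model file name, warm-up count, and run count are illustrative placeholders, not my exact script, and it assumes a Keras-style LeNet-5 rather than the DIGITS export):

```python
import time

import numpy as np
import tensorflow as tf

# Placeholder model file -- substitute the actual trained LeNet-5
model = tf.keras.models.load_model("lenet5.h5")

# A single 28x28 grayscale image with a batch dimension, values in [0, 1]
image = np.random.rand(1, 28, 28, 1).astype(np.float32)

# Warm-up runs so one-time graph/CUDA initialization does not skew the timing
for _ in range(20):
    _ = model(image, training=False).numpy()

# Average over many runs for a stable per-image latency;
# calling .numpy() forces the result back to the host so the GPU work is finished
runs = 1000
start = time.perf_counter()
for _ in range(runs):
    _ = model(image, training=False).numpy()
elapsed_ms = (time.perf_counter() - start) * 1000.0 / runs
print(f"average single-image latency: {elapsed_ms:.3f} ms")
```

The warm-up loop and the averaging are there so that one-time initialization does not end up inside the measured time.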
1. I expected inference on the Jetson Xavier to be faster than on the host PC, but the measurements show the opposite.
2. NVIDIA only provides a simple example of training a LeNet-5 model in DIGITS with the TensorFlow framework; it does not give a detailed procedure for running inference with the trained model (from DIGITS) on the Jetson Xavier.
3. I trained a LeNet-5 model, converted it to lenet5.pb, then converted that to lenet5.uff, but inference on the Jetson Xavier failed (see the sketch after this list for roughly what I tried).
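For reference, the conversion and engine-building path I tried looks roughly like the sketch below. It assumes the TensorRT 5.x/6.x UFF workflow that ships with JetPack; the input/output node names, shapes, and file names are placeholders and have to match the actual frozen graph exported from DIGITS.

```python
import uff
import tensorrt as trt

# Placeholders: these must match the actual frozen graph
FROZEN_PB = "lenet5.pb"
UFF_FILE = "lenet5.uff"
INPUT_NODE = "input_1"           # name of the graph's input tensor
INPUT_SHAPE = (1, 28, 28)        # CHW, as the UFF parser expects
OUTPUT_NODE = "dense_2/Softmax"  # name of the graph's output tensor

# Step 1: convert the frozen TensorFlow graph (.pb) to UFF
uff.from_tensorflow_frozen_model(FROZEN_PB,
                                 output_nodes=[OUTPUT_NODE],
                                 output_filename=UFF_FILE)

# Step 2: parse the UFF file and build a TensorRT engine on the Xavier
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.UffParser()

parser.register_input(INPUT_NODE, INPUT_SHAPE)
parser.register_output(OUTPUT_NODE)
if not parser.parse(UFF_FILE, network):
    raise RuntimeError("UFF parsing failed -- check node names and shapes")

builder.max_batch_size = 1
builder.max_workspace_size = 1 << 28   # 256 MB is plenty for LeNet-5
engine = builder.build_cuda_engine(network)

# Serialize the engine so it does not have to be rebuilt on every run
with open("lenet5.engine", "wb") as f:
    f.write(engine.serialize())
```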
Has anyone run into the same questions? If you have solutions to any of the above, I would greatly appreciate your kind response.