Why does the Inception_v1 preprocessing used on the Coral devboard give a different result on the Jetson TX2?

Hi

I am using the Coral devboard (INT8) and the Jetson TX2 (FP16) to test inference with the Inception_v1 model and an image of a trimaran.

On the Coral devboard, the preprocessing (this came with the devboard) is:

image = Image.open(args.input).convert('RGB').resize(size, Image.ANTIALIAS)

When I apply it on the Jetson, I change the dtype to FP32, like this:

image = np.asarray(Image.open(image_path).convert('RGB').resize(size, Image.ANTIALIAS), dtype=np.float32)

The prediction I get when using this preprocessing on the Jetson is:

maximum value 0.9786956 : Oystercatcher

Meanwhile, the preprocessing that NVIDIA provides for the Jetson (see this repo: https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification/blob/master/scripts/model_meta.py) is:

def preprocess_inception(image):
    return 2.0 * (np.array(image, dtype=np.float32) / 255.0 - 0.5)

image = np.asarray(Image.open(image_path).resize(size))
image = preprocess_inception(image)
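For what it's worth, that transform maps the uint8 range [0, 255] to [-1, 1], which I believe is the normalization the TF-slim Inception models expect. A quick check with made-up pixel values (not taken from the actual trimaran image):

import numpy as np

pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
print(2.0 * (pixels / 255.0 - 0.5))  # -> [-1.  0.  1.]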

The prediction I get is:

maximum value 0.6656681 : trimaran

What is happening? It is the same model, Inception_v1, so I am not sure why each devboard has its own preprocessing.
My goal is to benchmark both devboards, so ideally I should be using the same preprocessing. But now I am not sure which one to use.
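One way to see what is going on is to compare the value ranges the two pipelines actually feed the model. A rough sketch, reusing image_path and size from the snippets above:

import numpy as np
from PIL import Image

raw = np.asarray(Image.open(image_path).convert('RGB').resize(size, Image.ANTIALIAS), dtype=np.float32)

coral_style = raw                         # stays in [0, 255]
nvidia_style = 2.0 * (raw / 255.0 - 0.5)  # rescaled to [-1, 1]

print(coral_style.min(), coral_style.max())
print(nvidia_style.min(), nvidia_style.max())

So the FP32 model on the Jetson ends up being fed very different numbers depending on which pipeline is used.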

Okay, I made a mistake.
In the Coral-style preprocessing, what was missing was:

image /= 255
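So the full corrected Coral-style pipeline on the Jetson looks something like this (same placeholders as above):

image = np.asarray(Image.open(image_path).convert('RGB').resize(size, Image.ANTIALIAS), dtype=np.float32)
image /= 255  # the missing step: scale [0, 255] down to [0, 1]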

So now I get the same prediction (trimaran) with both preprocessings, but NVIDIA's gives a higher score:

  • Coral's = 64.68%
  • NVIDIA's = 66.56%