Wrong inference result on the TX2


I trained a MobileNet-based segmentation model using Google Colab, then deployed it on the Jetson TX2. I then ran a test video through the model on both the TX2 and a laptop (GTX 960M). Unfortunately, the result I got from the TX2 is much worse than the one from the laptop. I tried switching off the GPU and running the model on the CPU only, but the result is still the same.

This is my laptop’s result, which is what it’s supposed to look like:

And this is the TX2’s result:

Here is my MobileNet architecture:

My TX2 configuration: Ubuntu 16.04, JetPack 3.3, TensorFlow-GPU 1.10, CUDA 9.0, cuDNN 7.1.5
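When two machines disagree on the same model, the first thing worth ruling out is a framework version mismatch. A minimal, pure-Python sketch of the comparison (the version strings below are the hypothetical values from this thread; on each machine you would print `tf.__version__` to get the real ones):

```python
def version_tuple(v):
    """Parse a dotted version string like '1.10.0' into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

# Hypothetical values for the two machines in this thread:
laptop_tf = "1.11.0"  # what tf.__version__ reported on the laptop
tx2_tf = "1.10.0"     # what tf.__version__ reported on the TX2

print(version_tuple(laptop_tf) == version_tuple(tx2_tf))  # False: versions differ
```

Running this kind of check early would have pointed at the 1.10 vs 1.11 mismatch before suspecting the TX2 hardware.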

Please help me to solve this problem! Thank you!


Do you use the same implementation on the Jetson and the desktop?
If yes, could you share with us how you read the input video and do the pre-processing?


I think I have found the reason. My laptop was running TensorFlow 1.11.0. When I downgraded it to TensorFlow 1.10.0, matching the version on the TX2, I got exactly the same (bad) result as on the TX2. So this is not TX2-related but TensorFlow-related.

However, if you have any idea how to solve this problem besides retraining the model with TensorFlow 1.10.0, I would appreciate it!

To answer your question: I don’t do much pre-processing except grayscale conversion. Here is my full code: https://codeshare.io/5wQLjK
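For reference, a typical grayscale pre-processing step for a video frame looks like the sketch below. This is a NumPy-only illustration, not the poster’s actual code; the 224x224 input size and the [0, 1] scaling are assumptions that must match whatever the Colab training pipeline used, or the segmentation will degrade regardless of TensorFlow version:

```python
import numpy as np

def preprocess(frame_bgr, size_hw=(224, 224)):
    """Grayscale + resize a BGR video frame into a (1, H, W, 1) batch.

    Assumptions (must match the training pipeline): 224x224 input,
    pixel values scaled to [0, 1], single grayscale channel.
    """
    # Luminance grayscale (same BT.601 weights cv2.cvtColor uses for BGR2GRAY)
    gray = (0.114 * frame_bgr[..., 0]
            + 0.587 * frame_bgr[..., 1]
            + 0.299 * frame_bgr[..., 2])
    # Nearest-neighbour resize in plain NumPy (cv2.resize in a real pipeline)
    ys = np.linspace(0, gray.shape[0] - 1, size_hw[0]).round().astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, size_hw[1]).round().astype(int)
    resized = gray[np.ix_(ys, xs)]
    return (resized / 255.0).astype(np.float32)[np.newaxis, ..., np.newaxis]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for one video frame
print(preprocess(frame).shape)  # (1, 224, 224, 1)
```

A mismatch here (wrong channel order, wrong scaling, wrong size) produces bad masks on every machine, which is why it is worth ruling out before blaming the framework.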

One more question: is the TensorFlow 1.11.0 wheel in this topic https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-11-0-wheel-with-jetpack-3-3/ a CPU-only build?

Thank you!

Solved. I upgraded the TX2 to TensorFlow 1.11.0 and everything works perfectly.


Thanks for your feedback.
I guess this issue comes from implementation differences between TensorFlow v1.10 and v1.11.

Good to know it works for you now.