Wrong inference result on the TX2

Hi,

I trained a MobileNet-based segmentation model using Google Colab and then deployed it on the Jetson TX2. I then ran a test video through the model on both the TX2 and a laptop (GTX 960M). Unfortunately, the result I get from the TX2 is much worse than from the laptop. I tried switching off the GPU and running the model on the CPU only, but the result is still the same.
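For reference, one common way to force CPU-only execution in TensorFlow is to hide the GPU before importing it; the snippet below is only an illustration of that approach, not my exact script:

import os
# Hide the GPU from TensorFlow before it is imported, so every op falls back to the CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

with tf.Session() as sess:
    # Any op run here executes on the CPU because no GPU device is visible.
    print(sess.run(tf.constant("CPU-only run")))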

This is my laptop’s result, which is what it is supposed to look like:
https://uphinhnhanh.com/images/2019/02/18/imageacfc17b8d1e27261.png

And this is the TX2’s result:
https://uphinhnhanh.com/images/2019/02/18/Capture112a9a99f4ee7edf.jpg

Here is my MobileNet architecture:
https://uphinhnhanh.com/images/2019/02/18/image975a414b2f16c5d1.png

My TX2 configuration: Ubuntu 16.04, JetPack 3.3, TensorFlow-GPU 1.10, CUDA 9.0, cuDNN 7.1.5

Please help me to solve this problem! Thank you!

Hi,

Do you use the same implementation on the Jetson and the desktop?
If yes, could you share with us how you read the input video and do the pre-processing?

Thanks.

I think I have found the reason. My laptop was running TensorFlow 1.11.0. When I downgraded it to TensorFlow 1.10.0, which matches the version on the TX2, I got exactly the same result as on the TX2. So this is not TX2-related but TensorFlow-related.

However, if you have any idea how to solve this problem besides retraining the model with TensorFlow 1.10.0, I would appreciate it!

As for your question, I don’t do much pre-processing apart from grayscale conversion. Here is my full code: https://codeshare.io/5wQLjK
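Roughly, the inference loop looks like the sketch below. The model path, tensor names, input size, and the resize/scaling steps are only placeholders for this sketch; the real code is at the link above.

import cv2
import numpy as np
import tensorflow as tf

# Load the frozen graph (placeholder filename).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

cap = cv2.VideoCapture("test_video.mp4")  # placeholder video path
with tf.Session() as sess:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Pre-processing: grayscale conversion (resize and scaling shown only as typical placeholders).
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (224, 224)).astype(np.float32) / 255.0
        batch = gray[np.newaxis, :, :, np.newaxis]
        # Placeholder tensor names; the real ones are in the codeshare link.
        mask = sess.run("output:0", feed_dict={"input:0": batch})
cap.release()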

One more question: is the TensorFlow 1.11.0 wheel in this topic https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-11-0-wheel-with-jetpack-3-3/ a CPU-only build?
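If it helps to narrow this down, I assume a standard check like the one below, run on the TX2 with the wheel installed, would show whether the build is GPU-enabled (this is plain TensorFlow, nothing specific to my code):

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
# True only if the wheel was compiled with CUDA support.
print(tf.test.is_built_with_cuda())
# A GPU-enabled build on the TX2 should list a /device:GPU:0 entry here.
print(device_lib.list_local_devices())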

Thank you!

Solved. I upgraded to TensorFlow 1.11.0 and everything works perfectly.

Hi,

Thanks for your feedback.
Our guess is that this issue comes from implementation differences between TensorFlow v1.10 and v1.11.

Good to know it works for you now.