I was able to install TensorFlow on the Jetson TX2 by following the steps at http://www.jetsonhacks.com/2017/09/14/build-tensorflow-on-nvidia-jetson-tx2-development-kit/. However, I found that the first run of TensorFlow is quite slow compared to successive runs. I am aware that if TensorFlow is not built with the proper CUDA compute capability, the PTX will be JIT-compiled at run time, which takes time. But I am building TensorFlow with the proper CUDA compute capability (6.2), and it is still JIT-compiling the code. I am puzzled!
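For reference, the part of the build that controls this is the compute-capability setting consumed by TensorFlow's ./configure step. The environment-variable form below is a sketch; the exact prompt wording varies by TensorFlow version:

```shell
# Sketch: pin the CUDA compute capability before running TensorFlow's
# ./configure, so kernels are compiled for the TX2's GPU (compute 6.2)
# instead of being left as generic PTX that the driver must JIT on first use.
export TF_CUDA_COMPUTE_CAPABILITIES=6.2
./configure   # the variable above pre-answers the "compute capabilities" prompt
```

Note that even with the correct setting, if the build embeds only PTX (and no cubin) for 6.2, the CUDA driver still JIT-compiles it once and caches the result (by default under ~/.nv/ComputeCache), which would produce exactly this pattern: a slow first run followed by fast subsequent runs.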
There are two suggestions for improving the TensorFlow performance on TX2:
1. Maximize device performance:

sudo nvpmodel -m 0
sudo ~/jetson_clocks.sh
2. Use our TensorRT inference engine for deployment: