Since this seems to be a common request, I’m making my Jetson Nano builds of the TensorFlow 1.13.1 C API binaries public. You can download them from here:
18B Aug 19 01:26 libtensorflow.so -> libtensorflow.so.1
23B Aug 19 01:26 libtensorflow.so.1 -> libtensorflow.so.1.13.1
303M Aug 14 13:07 libtensorflow.so.1.13.1
28B Aug 19 01:25 libtensorflow_framework.so -> libtensorflow_framework.so.1
33B Aug 19 01:24 libtensorflow_framework.so.1 -> libtensorflow_framework.so.1.13.1
19M Aug 14 13:07 libtensorflow_framework.so.1.13.1
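To sanity-check the downloaded libraries, you can link a minimal C API smoke test against them. This is a sketch, assuming you placed the shared libraries above under /usr/local/lib and the matching C headers (tensorflow/c/c_api.h from the 1.13.1 source or lib_package tarball) under /usr/local/include; adjust the paths to your layout:

```shell
# Write a tiny C program that only calls TF_Version() from the C API.
cat > tf_version.c <<'EOF'
#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main(void) {
    /* TF_Version() returns the library version string, e.g. "1.13.1" */
    printf("TensorFlow C library version: %s\n", TF_Version());
    return 0;
}
EOF
# Compile and run only if the shared library is actually installed:
if [ -e /usr/local/lib/libtensorflow.so ]; then
    gcc tf_version.c -I/usr/local/include -L/usr/local/lib -ltensorflow -o tf_version
    LD_LIBRARY_PATH=/usr/local/lib ./tf_version
else
    echo "libtensorflow.so not found; skipping compile"
fi
```

If the program prints 1.13.1, linking and the runtime loader path are both set up correctly.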
After many unsuccessful cross-compilation attempts, I resorted to building the binaries directly on the Jetson Nano, similar to what is described at https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/.
The key to success was:
- Switching the compiler to gcc 5.5.0 - neither gcc 7.3.0 nor gcc 6.5.0 can compile Tensorflow 1.13.1 due to internal gcc errors (https://github.com/tensorflow/tensorflow/issues/25323)
sudo update-alternatives --remove-all gcc
sudo update-alternatives --remove-all g++
sudo apt-get install gcc-5 g++-5 gcc-7 g++-7
# register the compilers as alternatives (without this, --config has nothing to choose from)
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 10
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 10
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 20
# set compiler version to gcc-5 and g++-5
sudo update-alternatives --config gcc
sudo update-alternatives --config g++
- Attaching an external SSD to the Nano - I initially tried to free up space on a 16 GB SD card running the Jetson Nano image, but the TensorFlow build process needs far more space than you can reclaim there.
sudo mkdir -p /data
sudo mount /dev/sda1 /data/
- Setting up swap memory on the external SSD
I added 8 GB of swap space and moved all TensorFlow sources and bazel output to the external drive mounted at /data:
sudo dd if=/dev/zero of=/data/swapfile bs=1024 count=8388608
sudo chmod 600 /data/swapfile
sudo mkswap /data/swapfile
sudo swapon /data/swapfile
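The dd parameters work out to exactly the 8 GB mentioned above: 8388608 blocks of 1024 bytes each. A quick check of the arithmetic:

```shell
# 8388608 blocks * 1024 bytes/block = 8589934592 bytes = 8 GiB
swap_bytes=$((8388608 * 1024))
echo "$((swap_bytes / 1024 / 1024 / 1024)) GiB"   # prints "8 GiB"
```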
- Forcing bazel to write its output to the external SSD
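The exact option is not shown in the original post; one common way to do this (an assumption on my part, not necessarily what was used here) is bazel's --output_user_root startup option. The /data/bazel-cache path below is just an example:

```shell
# --output_user_root is a *startup* option, so it goes before the bazel command.
# /data/bazel-cache is a hypothetical location on the SSD mounted at /data;
# fall back to /tmp when /data isn't available (e.g. when trying this off the Nano).
BAZEL_OUT=/data/bazel-cache
mkdir -p "$BAZEL_OUT" 2>/dev/null || { BAZEL_OUT=/tmp/bazel-cache; mkdir -p "$BAZEL_OUT"; }
echo "cache directory: $BAZEL_OUT"
# Then invoke builds as:
#   bazel --output_user_root="$BAZEL_OUT" build //tensorflow/tools/lib_package:libtensorflow
```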
- Using this Tensorflow build configuration
sudo ln -s /usr/local/cuda-10.0 /usr/local/cuda
Please specify the location of python. [Default is /usr/bin/python]:
Found possible Python library paths:
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]
Do you wish to build TensorFlow with XLA JIT support? [Y/n]:
XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]:
Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]:
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Do you wish to build TensorFlow with TensorRT support? [y/N]:
No TensorRT support will be enabled for TensorFlow.
Please specify the locally installed NCCL version you want to use. [Default is to use https://github.com/nvidia/nccl]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 5.3
Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
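The same answers can also be supplied non-interactively through environment variables that the TF 1.13 configure script reads; a sketch mirroring the session above:

```shell
# Mirror the interactive answers above (variable names as read by TF 1.13's configure.py)
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.0
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=0
export TF_CUDA_COMPUTE_CAPABILITIES=5.3
export TF_CUDA_CLANG=0
export TF_NEED_MPI=0
# then, from the tensorflow source tree:
#   ./configure
```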
- And finally, compiling, excluding support for AWS (it breaks the build) and NCCL (it also breaks the build and isn't needed).
bazel build --config opt --config=noaws --config=nonccl --jobs 4 --ram_utilization_factor 50 --verbose_failures //tensorflow/tools/lib_package:libtensorflow
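With this target, the C API libraries and headers end up packaged as a tarball under bazel-bin; extracting it to /usr/local is one conventional way to install them (adjust the prefix to taste):

```shell
# The lib_package target produces a tar.gz containing lib/ and include/
PKG=bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
if [ -f "$PKG" ]; then
  sudo tar -C /usr/local -xzf "$PKG"
  sudo ldconfig   # refresh the linker cache so -ltensorflow resolves
else
  echo "build output not found: $PKG (run the bazel build first)"
fi
```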
Thanks to compiling from the SSD, the build goes through relatively quickly (a few hours).
The direct download link to the binaries is https://github.com/jens-totemic/tensorflow/releases/download/v1.13.1/tensorflow-1.13.1-nano-gpu.tar.xz