Request for prebuilt TensorFlow C/C++ API libs for Jetson Nano

Hi, I’m working on a project where we need to run TensorFlow inference on a Jetson Nano using the C API. We were able to run the inference on an x86_64 host with no issues using the prebuilt binaries provided by TensorFlow ([url]https://www.tensorflow.org/install/lang_c[/url]), namely libtensorflow.so and libtensorflow_framework.so.
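For reference, this is roughly the kind of minimal C API smoke test we have working on x86_64 (a sketch assuming the prebuilt libraries are extracted under /usr/local; TF_Version, TF_NewStatus and TF_DeleteStatus are part of the public C API in c_api.h):

```c
// Minimal TensorFlow C API smoke test.
// Build (paths assume the prebuilt package was unpacked to /usr/local):
//   gcc hello_tf.c -I/usr/local/include -L/usr/local/lib -ltensorflow -o hello_tf
#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main(void) {
    // Prints the library version string, proving the shared library loads.
    printf("TensorFlow C library version: %s\n", TF_Version());

    // Allocate and free a status object as a basic API sanity check.
    TF_Status* status = TF_NewStatus();
    if (TF_GetCode(status) != TF_OK) {
        fprintf(stderr, "unexpected non-OK status\n");
        return 1;
    }
    TF_DeleteStatus(status);
    return 0;
}
```

If this runs and prints the version, the library and headers are installed correctly and the real inference code can link against the same flags.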

We want to do the same on the Jetson Nano but couldn’t find prebuilt binaries for aarch64 online.

We also tried to compile the C API libraries manually with Bazel, both via cross-compilation and natively on the Jetson Nano itself, but the task is not easy and we spent many days without success.

NVIDIA is already providing users with prebuilt TensorFlow wheels through the download center ([url]https://developer.nvidia.com/embedded/downloads#?search=tensorflow[/url]) and through some threads in the forum.

Since you probably already have the build pipeline set up to build Python wheels, I wanted to ask whether it would be possible to also release the C API binaries (and maybe even C++ binaries).

Thanks in advance,
Gabriele

Hi,

Sorry, we don’t have an official C++ package at the moment.
We will pass your request to our internal team.

Here is a topic sharing the steps to build TensorFlow C++ library on Jetson:
[url]https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/[/url]

Thanks.

a C/C++ build for the Jetson TX2 would be great, too.

Can I third that? It would be much appreciated.

Hi,

Thanks for your suggestion.
We have passed this request to our internal team : )

Echoing this request. Working on the TX2 and really need C/C++ TF API.

Since this seems to be a common request, I’m making my Jetson Nano binaries of the TensorFlow 1.13.1 C API public. You can download them from here:
https://github.com/jens-totemic/tensorflow/releases/tag/v1.13.1

    18B Aug 19 01:26 libtensorflow.so -> libtensorflow.so.1
    23B Aug 19 01:26 libtensorflow.so.1 -> libtensorflow.so.1.13.1
   303M Aug 14 13:07 libtensorflow.so.1.13.1
    28B Aug 19 01:25 libtensorflow_framework.so -> libtensorflow_framework.so.1
    33B Aug 19 01:24 libtensorflow_framework.so.1 -> libtensorflow_framework.so.1.13.1
    19M Aug 14 13:07 libtensorflow_framework.so.1.13.1
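The versioned-symlink layout shown above is the standard shared-library convention, and can be recreated with ln -s if the archive only contains the real .so.1.13.1 files:

```shell
# Recreate the versioned symlink chain for the shared libraries
# (run in the directory containing the real .so.1.13.1 files)
ln -sf libtensorflow.so.1.13.1 libtensorflow.so.1
ln -sf libtensorflow.so.1 libtensorflow.so
ln -sf libtensorflow_framework.so.1.13.1 libtensorflow_framework.so.1
ln -sf libtensorflow_framework.so.1 libtensorflow_framework.so
```

The linker looks for libtensorflow.so at build time, while the runtime loader resolves the .so.1 name, so both links are needed.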

After lots of unsuccessful cross-compilation attempts, I resorted to building the binaries directly on the Jetson Nano, similar to what is described at https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/.

The key to success was:

  1. Switching the compiler to gcc 5.5.0 - neither gcc 7.3.0 nor gcc 6.5.0 can compile TensorFlow 1.13.1 due to internal gcc errors (Fail to build from source with gcc 7.3.1 · Issue #25323 · tensorflow/tensorflow · GitHub)
sudo update-alternatives --remove-all gcc
sudo update-alternatives --remove-all g++
sudo apt-get install gcc-5 g++-5 gcc-7 g++-7

# set compiler version to gcc-5 and g++-5
sudo update-alternatives --config gcc
sudo update-alternatives --config g++
  2. Attaching an external SSD to the Nano - I initially tried to empty out a 16 GB SD card running the Jetson Nano image, but the TensorFlow build process takes far more space than you can free up on the card.
sudo mount /dev/sda1 /data/
  3. Setting up swap memory on the external SSD
    I added 8 GB of swap space and moved all TensorFlow sources and Bazel output to the external drive mounted at /data. Note that the swap file has to be formatted with mkswap before it can be enabled:
sudo dd if=/dev/zero of=/data/swapfile bs=1024 count=8388608
sudo chmod 600 /data/swapfile
sudo mkswap /data/swapfile
sudo swapon /data/swapfile
  4. Forcing Bazel to write its output to the external SSD, using
export TEST_TMPDIR=/data/bazelcache
  5. Using this TensorFlow build configuration
sudo ln -s /usr/local/cuda-10.0 /usr/local/cuda

tensorflow-1.13.1$ ./configure

Please specify the location of python. [Default is /usr/bin/python]:

Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]
/usr/lib/python2.7/dist-packages
Do you wish to build TensorFlow with XLA JIT support? [Y/n]:
XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]:

Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 

Do you wish to build TensorFlow with TensorRT support? [y/N]: 
No TensorRT support will be enabled for TensorFlow.

Please specify the locally installed NCCL version you want to use. [Default is to use https://github.com/nvidia/nccl]: 

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 5.3

Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
  6. And finally compiling, excluding support for AWS (it breaks the build) and NCCL (it also breaks the build and is not needed).
bazel build --config opt --config=noaws --config=nonccl --jobs 4 --ram_utilization_factor 50 --verbose_failures //tensorflow/tools/lib_package:libtensorflow
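If the build goes through, the C library package produced by the lib_package target should end up under bazel-bin (the exact path below is what I’d expect for TF 1.13; it may vary slightly between versions):

```shell
# Archive produced by //tensorflow/tools/lib_package:libtensorflow
ls -lh bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz

# Unpack into /usr/local, creating lib/libtensorflow.so* and
# include/tensorflow/c/c_api.h, then refresh the linker cache
sudo tar -C /usr/local -xzf bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
sudo ldconfig
```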

Thanks to building from the SSD, the build completes relatively quickly (a few hours).
The direct download link to the binaries is https://github.com/jens-totemic/tensorflow/releases/download/v1.13.1/tensorflow-1.13.1-nano-gpu.tar.xz

Thank you so much J__T for the instructions on how to build the libraries, and especially for providing the pre-built binaries for TF 1.13.1!

Hopefully NVIDIA will provide future versions so that we don’t have to build them manually anymore, but this is a huge help for people who need a working API library and don’t have time/resources to build it themselves.

Hi,

Thanks for all of the feedback.
This request is passed to our internal team and is prioritized.

Thanks.

Hi J__T,

Can the pre-built library be used with the C++ API? And where should I put it on the Jetson Nano, please?

Hi,

Yes. You can check this link:
[url]https://github.com/jens-totemic/tensorflow/releases/tag/v1.13.1[/url]
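In case it helps, one common way to install the downloaded libraries on the Nano is to put them under /usr/local (the archive filename below assumes the release tarball from the link above; adjust it as needed):

```shell
# Extract the downloaded release archive
tar -xJf tensorflow-1.13.1-nano-gpu.tar.xz

# Copy the shared libraries (including their versioned symlinks)
# to a standard library directory and refresh the linker cache
sudo cp -a libtensorflow.so* libtensorflow_framework.so* /usr/local/lib/
sudo ldconfig

# Then link your program with -ltensorflow, with the c_api.h headers
# from the matching TensorFlow source tree on the include path.
```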

Thanks.

Hi AastaLLL,
are there any updates for official libraries (or at least an official build process) from NVIDIA?

Thanks

Hi,

Sorry for keeping you waiting.

This request is still under internal review.
Thanks and sorry for the inconvenience.

Now that you own Arm, I hope this is prioritized? :)

Hi, in case anyone would like, I have posted a pre-built binary of version 1.15 of the TensorFlow C API library here:
https://github.com/nevillerichards/tensorflow/releases/tag/v1.15

Hi,

Just wanted to pass along a pre-built TF 2.x lib that I found:

Scroll down to the instructions and you’ll see the link to a Google Drive.