How do I install TensorFlow on the Jetson TX1?

We bought this NVIDIA TX1 to do deep learning with a GPU.
But the most popular deep learning toolkit (TensorFlow) does not install on it, because the project does not support ARM chips.
We tried to build from source but ran into problems.

https://github.com/tensorflow/tensorflow/issues/4070

It would seem that if NVIDIA is serious about competing in the deep learning market, it would make a stronger effort to ensure that developers can use its hardware for deep learning without so much pain.

Currently there seems to be a lot of focus on Caffe, but hardly anyone in the deep learning community is using Caffe anymore.
Everyone is using TensorFlow now.

Please explain how we can get TensorFlow running on your hardware.

Just FYI, that URL refers to a TK1; the TX1 is the 64-bit-capable version. The TK1, with its ARM Cortex-A15, is purely 32-bit, and installing the 64-bit sample rootfs on a TK1 would fail. The 32-bit systems have had more support because armhf has been around much longer than aarch64. I am not a Java guru, but Java should be fine in a purely 32-bit environment on a TK1; it may get a bit confused on a TX1, where the kernel is 64-bit but the user space is 32-bit. A pure 64-bit environment is much more likely to succeed for Java software, so when using a TX1, make sure you are running the purely 64-bit version for Java.
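A quick way to spot the mixed 64-bit-kernel / 32-bit-user-space situation described above is to compare what the kernel and the C library each report. This is a generic sketch, not NVIDIA-provided tooling:

```shell
# Compare kernel architecture with user-space bitness.
# On a TX1 flashed with a 32-bit rootfs these will disagree.
kernel_arch=$(uname -m)             # kernel architecture (aarch64 on a TX1)
userspace_bits=$(getconf LONG_BIT)  # bitness of the user-space environment
echo "kernel: $kernel_arch, user space: ${userspace_bits}-bit"
if [ "$kernel_arch" = "aarch64" ] && [ "$userspace_bits" = "32" ]; then
  echo "mixed 64-bit kernel / 32-bit user space: risky for Java"
fi
# The JVM banner also shows its own bitness, e.g.:
#   java -version 2>&1 | grep -i '64-bit'
```

If the two values disagree, reflash with the pure 64-bit sample rootfs before trying Java-dependent builds.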

In terms of CUDA, the TK1 supports up to CUDA 6.5. The reason the TK1 does not support later versions is that, starting with version 7, CUDA targets only 64-bit systems. Hardware-accelerated libraries may hit a snag if CUDA versions are mixed.
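Before building anything that links against CUDA, it is worth confirming which toolkit version is actually installed, so versions don't get mixed. A small sketch (the path is the usual JetPack default and may differ on your image):

```shell
# Report the installed CUDA toolkit version, if any.
NVCC=${NVCC:-/usr/local/cuda/bin/nvcc}   # JetPack's default install path
if [ -x "$NVCC" ]; then
  "$NVCC" --version | grep -i release    # e.g. "release 6.5" on a TK1
elif command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep -i release
else
  echo "no nvcc found; CUDA toolkit may not be installed"
fi
```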

Once those dependencies are met it should be possible to build the aarch64 version.

I messed up.
We purchased the TX1, not the TK1.
Sorry for the confusion.
TensorFlow doesn't currently provide support for any ARM chips at all, neither 32-bit nor 64-bit.

Here is the complete list of what they support:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/get_started/os_setup.md

Notice that they support NVIDIA GPUs, but only when paired with an x86 chip.
This makes the NVIDIA Jetson dev kits completely useless for deep learning.

When building from source, the piece that currently fails to build is native code used to generate Java code.
It is a plugin to protoc.
The grpc-java team, who maintain this component, do not currently support ARM builds.
And this component is required by TensorFlow.
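One possible workaround, since grpc-java ships no ARM binaries, is to build the protoc plugin natively on the TX1 and point the TensorFlow build at the local binary. This is a hypothetical sketch based on grpc-java's compiler build instructions; the Gradle task name and output path may differ by version. It dry-runs by default so you can review the steps:

```shell
# Build the protoc-gen-grpc-java codegen plugin natively for aarch64.
# Dry-run by default: set DO_BUILD=1 on the Jetson to actually execute.
run() {
  if [ "${DO_BUILD:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}
run git clone https://github.com/grpc/grpc-java.git
run cd grpc-java/compiler
run ../gradlew java_pluginExecutable
# On success the plugin should land under build/exes/java_plugin/,
# and the TensorFlow build can be pointed at this native binary
# instead of the unsupported prebuilt x86 one.
```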

It would be very nice if NVIDIA would consider contributing to the TensorFlow project to ensure that this software can run on the GPU dev kits it is selling.

There’s a lot in there I don’t know about, but CUDA 7.5 is available on the JTX1. I don’t know why it would require x86 unless it depends on something like the PCIe bus instead of direct CUDA calls. Not having packages built for armv8-a would be a “show stopper” if the source is not available. How much of this is available as source code?

I know nothing about gRPC-java, but I suspect that with a pure 64-bit environment it might be a reasonable thing to port. I guess a big question is whether any RPC calls are architecture-dependent.

I was also experimenting with getting TensorFlow to run on my TX1 for a couple of months, then decided to move on to other projects for a while, but I was following this discussion while I was actively trying. The thing is, you need a strict 64-bit setup, and that affected other things I wanted. It might help to follow this,

As an aside, I also have a Pine64 board (Allwinner A64, a 64-bit ARM CPU), and somebody on that forum did get it to build, but hasn't reported much on how it runs. The 851 issue page also has reports of a successful build on the TX1, but with problems running it. So I think there is light at the end of the tunnel for getting it operational.

Has there been any progress on this? I am also interested in installing TensorFlow on a Jetson TX1.

Hi,

Please find the information in this topic:
https://devtalk.nvidia.com/default/topic/999726

Thanks.