Is it possible to run TensorFlow on the TX2?

I intend to run some deep neural networks on the TX2. Is it possible to run TensorFlow on the TX2?

With CUDA.

What does work is the jetson-inference samples:
git clone https://github.com/dusty-nv/jetson-inference

Here’s an article on how to install and run TensorFlow on the Jetson TX2: http://www.jetsonhacks.com/2017/04/02/tensorflow-on-nvidia-jetson-tx2-development-kit/
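Summarizing that article, the overall flow looks roughly like this. Only buildTensorFlow.sh and setTensorFlowEV.sh are confirmed later in this thread; the other script names are assumptions based on the article, so check the repository before running anything:

```shell
# Hedged sketch of the JetsonHacks install flow; run on the TX2 itself.
# Script names other than setTensorFlowEV.sh and buildTensorFlow.sh
# are assumptions taken from the article, not verified here.
git clone https://github.com/jetsonhacks/installTensorFlowTX2
cd installTensorFlowTX2
./createSwapfile.sh     # bazel needs more memory than the TX2 has
./setTensorFlowEV.sh    # configure the TensorFlow build environment
./buildTensorFlow.sh    # the build itself; expect it to take a long time
```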

Yeah, that is amazing! Thank you for sharing the article!

It also seems possible to run/install Caffe on the JTX2: https://github.com/jetsonhacks/installCaffeJTX2

Kangalow, I noticed that in the TensorFlow installation video your system displays 6 CPUs and utilizes them somehow.
On my side I see 4 CPUs, and the first is mostly at 100% load while the others sit at 0%.
Is there a tweak that enables utilization of the other processors, or is your TX2 a different/custom version?

The Jetson TX2 can be configured in different ways in order to maximize performance and/or minimize energy usage. NVIDIA provides the nvpmodel utility, which configures the CPU/GPU complex for the most efficient power/compute use.

By default at boot, the Denver2 cores are disabled to lower idle power. You can use either the ~/jetson_clocks.sh script or the nvpmodel utility to enable them. Here’s a previous thread about it: https://devtalk.nvidia.com/default/topic/1000345/jetson-tx2/two-cores-disabled-/

And here’s another quality Jetsonhacks article about using the nvpmodel utility:
http://www.jetsonhacks.com/2017/03/25/nvpmodel-nvidia-jetson-tx2-development-kit/
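In case it helps, the typical invocations look roughly like this on the TX2 (run on the device; mode 0 is MAXN, all cores at maximum clocks, per NVIDIA's mode table):

```shell
# Query the current power mode
sudo nvpmodel -q --verbose
# Switch to MAXN (all 6 cores enabled, maximum clocks)
sudo nvpmodel -m 0
# Alternatively, max out clocks (and the fan) with the home-directory script
sudo ~/jetson_clocks.sh
```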

I have tried ./jetson_clocks.sh, and now I have tried nvpmodel:
sudo nvpmodel -q --verbose
NV Power Mode: MAXP_CORE_ALL
2

Thank you for pointing to it!

Perhaps it requires a reboot for the system monitor to reflect the other CPUs as functioning. The first is still loaded at 100% and the 3 others at 0%.
However, tegrastats now reports:

RAM 4270/7854MB (lfb 3x1MB) SWAP 1616/8192MB (cached 134MB) cpu [100%@1421,100%@1434,100%@1431,100%@1420,100%@1422,100%@1420] EMC 16%@1600 APE 150 VDE 1203 GR3D 0%@1122
Now, after a restart, the system monitor shows all cores functioning.
@dusty_nv: thank you for pointing out the way to enable more cores!

Something happened to the system while the TensorFlow build was running.
The Jetson got powered off somehow. Now I am continuing with the build and the corresponding installation steps.
It seems that after powering the device back on, the swap file is no longer used.
Shall I delete the previous 8 GB file and create a new one?
Now I have recreated the swap file and continued with the build:

./buildTensorFlow.sh 
WARNING: Sandboxed execution is not supported on your system and thus hermeticity of actions cannot be guaranteed. See http://bazel.build/docs/bazel-user-manual.html#sandboxing for more information. You can turn off this warning via --ignore_unsupported_sandboxing.
INFO: Found 1 target...
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
  bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 1.638s, Critical Path: 0.01s

Starting again from the step ./setTensorFlowEV.sh

Screenshot from 2017-04-26 01-52-32.png

The swap file does not persist across a system power reset unless so instructed in /etc/fstab.

You can reuse the same swap file of course.
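For reference, recreating the swap file and making it persistent might look like this; the /mnt/swapfile path is an example, not necessarily what the JetsonHacks script uses:

```shell
# Recreate an 8 GB swap file and enable it (run on the Jetson, needs root)
sudo fallocate -l 8G /mnt/swapfile
sudo chmod 600 /mnt/swapfile
sudo mkswap /mnt/swapfile
sudo swapon /mnt/swapfile
# To make it persist across reboots, add this line to /etc/fstab:
#   /mnt/swapfile  none  swap  sw  0  0
swapon --show   # verify the swap is active
```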

The system got powered down again while building TF. Perhaps overheating. I should switch the CPU utilization mode to a more power-safe setting to avoid that, and perhaps enable jetson_clocks.sh.
Starting the build again.
All CPUs are loaded at 100%. The network fails repeatedly, for some reason.

./tensorflow/core/lib/strings/proto_text_util.h: In instantiation of 'bool tensorflow::strings::ProtoParseNumericFromScanner(tensorflow::strings::Scanner*, T*) [with T = int]':
bazel-out/host/genfiles/tensorflow/core/framework/op_def.pb_text.cc:643:96:   required from here
./tensorflow/core/lib/strings/proto_text_util.h:167:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
  bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 4675.661s, Critical Path: 4372.14s

Does the above look like a successful end of the "./buildTensorFlow.sh" execution?
Thanks!
Now I can write Hello World etc. using TensorFlow!
More tutorials/examples:
http://web.stanford.edu/class/cs20si/lectures/slides_01.pdf
http://web.stanford.edu/class/cs20si/syllabus.html
https://github.com/chiphuyen/tf-stanford-tutorials
https://www.tensorflow.org/get_started/get_started

But some commands have been renamed in newer releases,
e.g. tf.mul -> tf.multiply, etc.
To track the changes, see https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md
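TensorFlow 1.0 also ships a conversion script, tf_upgrade.py (under tensorflow/tools/compatibility in the source tree), that rewrites most of the renamed 0.x calls such as tf.mul -> tf.multiply automatically. The file names here are illustrative:

```shell
# Rewrite an old 0.x-style script to the 1.0 API (input/output names are examples)
python tf_upgrade.py --infile old_graph.py --outfile upgraded_graph.py
```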

Hi,

Thanks for your feedback.

The warning you posted in #13 is from TensorFlow and is caused by a signed/unsigned integer comparison.
This should be independent of the device.

Have you hit any further errors?
Any feedback is appreciated.

I am curious, because I’m trying to collect data on Jetson operating environments:

Are you using the stock heatsink and fan on the Jetson?
Are you using the stock power supply that came with the Jetson?
How warm is the office/area where you keep the Jetson?

Another reason for the system rebooting would be if it runs out of RAM and panics, so it doesn’t necessarily have to be overheating.
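One way to check whether RAM rather than heat is the culprit is to watch memory while the build runs; free is standard Linux, and tegrastats is the Jetson-specific tool already shown above:

```shell
# Show RAM and swap usage in megabytes; rerun (or wrap in `watch`) during the build.
# On the Jetson, `sudo ~/tegrastats` gives the per-core view quoted earlier.
free -m
```

If the available memory approaches zero while bazel is running, the swap file is either missing or too small.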

Is TensorFlow part of JetPack 3.0?

TensorFlow is a Google project rather than part of the TX2 software stack, but it can be used on the TX2.

Hi,

Please build tensorflow from source.

Here is some information:
https://devtalk.nvidia.com/default/topic/1000717/jetson-tx2/tensorflow-on-jetson-tx2/

The link below contains the wheel files required for installing TensorFlow on the TX2:

https://github.com/peterlee0127/tensorflow-nvJetson
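Installing from one of those wheels would look roughly like this; the exact filename depends on the release you download, so the one shown is only an example:

```shell
# pip install of a downloaded aarch64 wheel (filename is illustrative)
sudo pip install tensorflow-1.3.0-cp27-cp27mu-linux_aarch64.whl
```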