Wheel file build process for TF 1.13 on Jetson TX2 - JP4.2

Hi,

Could you please elaborate on the build process used to produce the following TensorFlow wheel for the Jetson TX2 flashed with JetPack 4.2?

https://developer.download.nvidia.com/compute/redist/jp/v42/tensorflow-gpu/tensorflow_gpu-1.13.1+nv19.5-cp36-cp36m-linux_aarch64.whl

I am trying to compile TensorFlow with GPU support.

Thank you.

Hi,

You can follow the steps shared in this topic:
[url]https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/[/url]
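For reference, the wheel linked above can also be installed straight from NVIDIA's package index instead of being built from source. A sketch, assuming Python 3.6 and the JetPack 4.2 CUDA/cuDNN stack are already on the board (package names follow NVIDIA's Jetson install notes; adjust if your image differs):

```shell
# Install prerequisites and NVIDIA's pre-built TF 1.13.1 wheel for JetPack 4.2.
sudo apt-get install -y python3-pip libhdf5-serial-dev hdf5-tools
pip3 install --user --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.5
```

Building from source is only needed for custom configurations (for example, the C++ libraries).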

Thanks.

Hi,

The build fails.
I am getting the following log on TX2:

INFO: From ProtoCompile tensorflow/core/protobuf/replay_log.pb.cc:
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
bazel-out/aarch64-opt/genfiles/external/protobuf_archive/src: warning: directory does not exist.
tensorflow/core/protobuf/replay_log.proto: warning: Import tensorflow/core/protobuf/cluster.proto but not used.
tensorflow/core/protobuf/replay_log.proto: warning: Import tensorflow/core/framework/graph.proto but not used.
ERROR: /home/nvidia/Documents/tensorflow/tensorflow/python/BUILD:4057:1: Linking of rule '//tensorflow/python:_pywrap_tensorflow_internal.so' failed (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command
(cd /home/nvidia/.cache/bazel/_bazel_nvidia/38f510ce87073e6e7989e37d45036c24/execroot/org_tensorflow &&
exec env -
LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:
PATH=/usr/local/cuda-10.0/bin:/home/nvidia/.local/bin:/home/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PWD=/proc/self/cwd
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -shared -o bazel-out/host/bin/tensorflow/python/_pywrap_tensorflow_internal.so '-Wl,-rpath,$ORIGIN/../../_solib_local/_U_S_Stensorflow_Spython_C_Upywrap_Utensorflow_Uinternal.so___Utensorflow' '-Wl,-rpath,$ORIGIN/../../_solib_local/_U@local_Uconfig_Ucuda_S_Scuda_Ccublas___Uexternal_Slocal_Uconfig_Ucuda_Scuda_Scuda_Slib' '-Wl,-rpath,$ORIGIN/../../_solib_local/_U@local_Uconfig_Ucuda_S_Scuda_Ccusolver___Uexternal_Slocal_Uconfig_Ucuda_Scuda_Scuda_Slib' '-Wl,-rpath,$ORIGIN/../../_solib_local/_U@local_Uconfig_Ucuda_S_Scuda_Ccudart___Uexternal_Slocal_Uconfig_Ucuda_Scuda_Scuda_Slib' -Lbazel-out/host/bin/_solib_local/_U_S_Stensorflow_Spython_C_Upywrap_Utensorflow_Uinternal.so___Utensorflow -Lbazel-out/host/bin/_solib_local/_U@local_Uconfig_Ucuda_S_Scuda_Ccublas___Uexternal_Slocal_Uconfig_Ucuda_Scuda_Scuda_Slib -Lbazel-out/host/bin/_solib_local/_U@local_Uconfig_Ucuda_S_Scuda_Ccusolver___Uexternal_Slocal_Uconfig_Ucuda_Scuda_Scuda_Slib -Lbazel-out/host/bin/_solib_local/_U@local_Uconfig_Ucuda_S_Scuda_Ccudart___Uexternal_Slocal_Uconfig_Ucuda_Scuda_Scuda_Slib -Wl,--version-script bazel-out/host/bin/tensorflow/python/pywrap_tensorflow_internal_versionscript.lds '-Wl,-rpath,$ORIGIN/,-rpath,$ORIGIN/..' -Wl,-soname,_pywrap_tensorflow_internal.so -Wl,-z,muldefs -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -pthread -Wl,-rpath,../local_config_cuda/cuda/lib64 -Wl,-rpath,../local_config_cuda/cuda/extras/CUPTI/lib64 -Wl,-S -Wl,-no-as-needed -Wl,-z,relro,-z,now
'-Wl,--build-id=md5' '-Wl,--hash-style=gnu' -no-canonical-prefixes -fno-canonical-system-headers -B/usr/bin -Wl,--gc-sections -Wl,@bazel-out/host/bin/tensorflow/python/_pywrap_tensorflow_internal.so-2.params)
collect2: error: ld returned 1 exit status
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 15305.330s, Critical Path: 726.77s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%]
INFO: 7964 processes: 7964 local.
FAILED: Build did NOT complete successfully

Hi,

failed (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command

Have you followed the steps shared in comment #2?
Please build TensorFlow without the NCCL option:

bazel build --config=opt --config=nonccl ...
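For context, the whole sequence on the TX2 looks roughly like the following. This is a sketch, not the exact recipe: the checkout tag, configure answers, and wheel filename are assumptions that need adjusting to your setup (TF 1.13 expects roughly Bazel 0.19.x; the TX2's GPU is compute capability 6.2).

```shell
# Illustrative build sequence for TF v1.13.1 on a Jetson TX2 (JetPack 4.2).
cd ~/tensorflow
git checkout v1.13.1
./configure   # enable CUDA, point at CUDA 10.0 and cuDNN 7.x, compute capability 6.2
bazel build --config=opt --config=cuda --config=nonccl \
    //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip3 install --user /tmp/tensorflow_pkg/tensorflow-*.whl   # actual filename may differ
```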

Thanks.

Hi,

Yes, I am using the same configuration. The .so creation succeeds, but I am not able to run a sample. I have also shared an example file on:

It gives undefined references to TF libraries. We would appreciate it if a pre-built C++ TF package were provided for the Jetson TX2.
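Undefined references at link time usually mean the TF shared libraries are missing from the link line. A minimal sketch, assuming the C++ library was produced with `bazel build --config=opt --config=cuda --config=nonccl //tensorflow:libtensorflow_cc.so` (the paths and the `TF_ROOT` variable are illustrative, not part of any official recipe):

```shell
# Hypothetical compile/link line for a C++ TF 1.13 sample on the TX2.
TF_ROOT=$HOME/tensorflow
g++ -std=c++11 sample.cc -o sample \
    -I"$TF_ROOT" -I"$TF_ROOT/bazel-genfiles" \
    -L"$TF_ROOT/bazel-bin/tensorflow" -ltensorflow_cc -ltensorflow_framework \
    -Wl,-rpath,"$TF_ROOT/bazel-bin/tensorflow"
```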

Thank you

Hi,

Sorry, we don't have a concrete schedule for providing a C++ package.
Maybe you can check with the user in topic 1055131 directly for more information.

Thanks.

Thank you for your support.

The steps linked in comment #1 work with the v1.13.1 tag instead of the r1.13 branch on a Jetson TX2 flashed with JetPack 4.2.
Just in case someone needs help:
one can also use a different Bazel cache directory and create a swap file on an SD card to cope with the build's memory and disk demands.
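Those two workarounds can be sketched as follows; the SD-card mount point is an assumption, and the card should use a filesystem that supports swap files (e.g. ext4):

```shell
# Create and enable an 8 GB swap file on an SD card mounted at /media/sdcard (assumed path).
sudo fallocate -l 8G /media/sdcard/swapfile
sudo chmod 600 /media/sdcard/swapfile
sudo mkswap /media/sdcard/swapfile
sudo swapon /media/sdcard/swapfile
free -h    # confirm the new swap space is active

# Point Bazel's cache at the SD card as well, to spare the small eMMC:
bazel --output_user_root=/media/sdcard/bazel-cache build --config=opt --config=nonccl \
    //tensorflow/tools/pip_package:build_pip_package
```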

Thank you :-)

Thanks for the update : )