I want to compile TensorFlow from source with dGPU support to run TF-TRT integration, but installing Bazel is giving me a bunch of errors. I know there is a pre-compiled .whl for Jetson AGX; can anyone share how to accomplish the same on DRIVE?
You can try Bazel 0.10.0, which worked for us previously.
Here are some resources for your reference:
1. Install tool and dependency
Please check this script for information:
2. Patch for TensorFlow
Please remember to apply this patch to get it working in an ARM environment:
3. GPU architecture for DRIVE AGX
Please enter the corresponding GPU architecture to get it working on the DRIVE AGX platform:
[i]Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]
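For reference, this prompt can also be answered non-interactively by pre-setting the environment variable the configure script reads. The capability values below are assumptions to verify against the CUDA GPUs page (the DRIVE AGX Xavier integrated Volta GPU is typically listed as 7.2, the PX2 integrated Pascal GPU as 6.2):

```shell
# Assumed mapping (verify on https://developer.nvidia.com/cuda-gpus):
#   DRIVE AGX Xavier iGPU (Volta)  -> 7.2
#   DRIVE PX2 iGPU (Pascal)        -> 6.2
# Pre-setting this variable lets ./configure skip the interactive prompt:
export TF_CUDA_COMPUTE_CAPABILITIES=7.2
echo "building for compute capability: $TF_CUDA_COMPUTE_CAPABILITIES"
```

Listing only the capability you need keeps the build time and binary size down, per the note in the prompt above.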
edit: This is not reproducible on r1.8; I am able to install and run TF 1.8 successfully with this approach.
@AastaLLL, thanks for the information. I was able to install Bazel 0.10.0 with the instructions. But after applying the patch for TF r1.12, when I run
bazel build -c opt --local_resources 3072,4.0,1.0 --verbose_failures --config=cuda //tensorflow/tools/pip_package:build_pip_package, I get the following error:
ERROR: /home/ubuntu/tensorflow/third_party/gpus/cuda_configure.bzl:121:1: file '@bazel_tools//tools/cpp:windows_cc_configure.bzl' does not contain symbol 'setup_vc_env_vars'
ERROR: error loading package '': Extension file 'third_party/gpus/cuda_configure.bzl' has errors
ERROR: error loading package '': Extension file 'third_party/gpus/cuda_configure.bzl' has errors
INFO: Elapsed time: 0.364s
FAILED: Build did NOT complete successfully (0 packages loaded)
This must have to do with the different GPU architecture; can you tell me where I need to make those changes?
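One likely cause (an assumption, since the missing symbol lives in @bazel_tools rather than in the TF tree) is a Bazel/TensorFlow version mismatch rather than the GPU architecture: TF r1.12's build files expect a newer Bazel than 0.10.0. A minimal shell sketch for checking this; the 0.15.0 floor is an assumption to verify against the check_bazel_version() call in configure.py of your TF checkout:

```shell
# Sketch: compare the installed Bazel against an assumed minimum for TF r1.12.
version_ge() {  # true if $1 >= $2 (relies on GNU `sort -V`)
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required=0.15.0   # assumed minimum; confirm in configure.py
have=0.10.0       # the version installed in this thread
if version_ge "$have" "$required"; then
  echo "Bazel $have should be new enough for TF r1.12"
else
  echo "Bazel $have is likely too old; need >= $required"
fi
```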
@AastaLLL Is there a way to cross-compile for DRIVE AGX, wherein I build TF on a more powerful GPU (1080 Ti, etc.), since building TF on DRIVE takes >3 hrs? This is the closest thing I have found, for the RPi
It’s still recommended to build TensorFlow on the DRIVE platform directly.
We have tested cross-compilation before, but it introduced lots of issues.
Can you share the steps for building TensorFlow on the DRIVE platform directly? I tried to follow the installation steps on tensorflow.org, but was not successful on PX2 (on other systems these steps work fine).
Please follow the discussion on https://devtalk.nvidia.com/default/topic/1049100/general/tensorflow-installation-on-drive-px2-/post/5324624/#5324624 thread. Thanks.
I had tried this link. It's for Python 2, and this whl file (tensorflow-1.13.1-cp27-cp27mu-linux_aarch64.whl) contains only the normal TensorFlow, not the TensorFlow-GPU version.
Can you share a whl for Python 3.5, TensorFlow-GPU (version >10), for DRIVE PX2?
It would also be helpful if you could share some examples of Python TensorFlow-GPU object detection on DRIVE PX2. I had tried it, but the performance was not good: "https://devtalk.nvidia.com/default/topic/1051626/general/performance-of-drive-px2-in-comparison-to-titan-xp-need-help-/post/5337927/#5337927"
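As an aside, a quick way to tell a CPU-only wheel from a GPU-enabled one is to ask TensorFlow itself; `tf.test.is_built_with_cuda()` and `tf.test.is_gpu_available()` are part of the TF 1.x API (a sketch, assuming the wheel is already installed):

```shell
# Sketch: report whether the installed TF wheel was compiled with CUDA
# and whether a GPU is actually visible at runtime.
python3 - <<'EOF'
import tensorflow as tf
print('built with CUDA :', tf.test.is_built_with_cuda())
print('GPU available   :', tf.test.is_gpu_available())  # TF 1.x API
EOF
```

If the first line prints False, the wheel is a CPU-only build regardless of what hardware is present.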
Hi, I am trying to install TensorFlow 1.13 on the DRIVE AGX Xavier. From my understanding there is still no wheel for this available online.
Hence, I am trying to build TensorFlow 1.13 from source. The forums seem to point to the script here: https://github.com/AastaNV/JEP/blob/master/script/TensorFlow_1.6/tf1.6_build_from_source.sh but it no longer exists.
Are there any other instructions, or do the official TensorFlow instructions work fine on the DRIVE Xavier?
Thanks in advance.
We don’t support TensorFlow for DRIVE AGX officially, so we don’t have a wheel for it.
We removed the script for Jetson because we no longer maintain it for newer versions. You need to follow https://www.tensorflow.org/install/source to install it from source. Thanks!
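For anyone landing here, the tensorflow.org guide boils down to roughly the following sequence. This is an illustrative sketch, not an official recipe: the branch name, package list, and output paths are assumptions to adapt to your board and TF version:

```shell
# Illustrative build-from-source sequence for an aarch64 DRIVE board
# (adapted from https://www.tensorflow.org/install/source).
sudo apt-get install -y python3-pip python3-dev
pip3 install -U numpy wheel
git clone -b r1.13 https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure        # answer the CUDA and compute-capability prompts
bazel build --config=opt --config=cuda \
    //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip3 install /tmp/tensorflow_pkg/tensorflow-*.whl
```

Expect the bazel build step to dominate the run time on the board itself, as noted earlier in this thread.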