Hi,
I got an error saying that cupti.h was not found during the TensorFlow build.
nvidia@tegra-ubuntu:~/work/tf_build/tensorflow$ dpkg -L cuda-cupti-10-2
/.
/usr
/usr/local
/usr/local/cuda-10.2
/usr/local/cuda-10.2/targets
/usr/local/cuda-10.2/targets/aarch64-linux
/usr/local/cuda-10.2/targets/aarch64-linux/lib
/usr/local/cuda-10.2/targets/aarch64-linux/lib/libcupti.so.10.2.11
/usr/local/cuda-10.2/targets/aarch64-linux/lib/libnvperf_host.so
/usr/local/cuda-10.2/targets/aarch64-linux/lib/libnvperf_target.so
/usr/share
/usr/share/doc
/usr/share/doc/cuda-cupti-10-2
/usr/share/doc/cuda-cupti-10-2/changelog.Debian.gz
/usr/share/doc/cuda-cupti-10-2/copyright
/usr/local/cuda-10.2/targets/aarch64-linux/lib/libcupti.so.10.2
/usr/local/cuda-10.2/targets/aarch64-linux/lib/libcupti.so
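A quick way to confirm that the package ships no headers at all is to filter the package listing for `.h` files (a sketch only; `cuda-cupti-10-2` is the package name from the output above):

```shell
# List the package contents and keep only header files; if the package
# provides no headers (or is not installed), print a note instead.
dpkg -L cuda-cupti-10-2 2>/dev/null | grep '\.h$' || echo "no headers installed"
```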
Can you please tell me how I can install cupti.h?
I am using the following package versions:
SDKM: sdkmanager_1.0.1-5538_amd64.deb
DRIVE Software: 10.0 (rev. 1)
Dear @Masayuki.Kamoda,
Note that the header files and library samples were removed in DRIVE SW 10.0 to save file system space. You may try manually copying the needed header files. May I know why you are looking to install TensorFlow on DRIVE AGX? Did you check using TensorRT on DRIVE AGX to optimize your TF models?
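For example, something like the following could copy the CUPTI headers from a host CUDA toolkit to the board (a sketch only; both paths and the hostname are assumptions based on the dpkg listing above, so verify them on your systems first):

```shell
# Copy the CUPTI headers from the host CUDA toolkit to the target.
# HOST_CUDA_INC and TARGET_INC are assumed paths; adjust to your install.
HOST_CUDA_INC=/usr/local/cuda/include
TARGET_INC=/usr/local/cuda-10.2/targets/aarch64-linux/include

# Run on the host (uncomment after checking that both directories exist):
# scp "$HOST_CUDA_INC"/cupti*.h nvidia@tegra-ubuntu:"$TARGET_INC"/
```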
Dear SivaRamaKrishnaNV,
You may try manually copying the needed header files.
Thank you! Where can I find that file?
May I know why you are looking to install TensorFlow on DRIVE AGX? Did you check using TensorRT on DRIVE AGX to optimize your TF models?
I use my own network.
TensorRT could not be used because of the limited number of supported APIs.
I asked the question below, but it has not been resolved:
Linux version: Ubuntu 16.04 LTS
GPU type: GeForce GTX 1080
NVIDIA driver version: 410.72
CUDA version: 9.0
cuDNN version: 7.0.5
Python version [if using python]: 3.5.2
TensorFlow version: tensorflow-gpu 1.9
TensorRT version: 5.0.2.6
Actual problem:
I tried the example script under the samples/python/uff_ssd folder. The script downloads the SSD_inception model, creates a UFF parser, builds an engine, and performs inference on an image.
Now, instead of downloading a pre-trained model, I trained m…
Dear @Masayuki.Kamoda,
Could you check /usr/local/cuda/include on the host machine?
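Something like this should list the CUPTI headers if they are present (assumes the default toolkit path; it prints nothing when none are found):

```shell
# Search the host CUDA include directory for CUPTI headers; output is empty
# if no headers are found or the directory does not exist.
find /usr/local/cuda/include -maxdepth 1 -name 'cupti*.h' 2>/dev/null | sort
```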
Dear @Masayuki.Kamoda,
Can we close this topic?
Dear SivaRamaKrishnaNV,
Yes. The build has completed, and I am now verifying operation.