No CUDA Driver API on the TK1?

Does the TK1 ship with libcuda.so? All the runtime CUDA Samples built without a problem.

I don’t see it:

It is under /usr/lib.

It’s not in /usr/lib/ on my TK1. :(

I even reinstalled the cuda-toolkit. Not there.

On my TK1 it’s in:

/usr/lib/arm-linux-gnueabihf/tegra/libcuda.so
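(If it isn't in that exact spot on your board, a quick way to hunt for it is something like:)

find /usr -name 'libcuda.so*' 2>/dev/null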

I see it now. Thanks.

But this means the makefile is broken – at least for a newly installed system like mine.

Builds with these temporary hacks to the Makefile:

143,144d142
< CUDA_SEARCH_PATH = /usr/lib/arm-linux-gnueabihf/tegra
<
187c185
<       $(EXEC) $(NVCC) $(ALL_LDFLAGS) $(GENCODE_FLAGS) -o $@ $+ $(LIBRARIES) -L /usr/lib/arm-linux-gnueabihf/tegra
---
>       $(EXEC) $(NVCC) $(ALL_LDFLAGS) $(GENCODE_FLAGS) -o $@ $+ $(LIBRARIES)
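With those hacks in place, an individual Drv sample builds from its own directory, something like this (deviceQueryDrv as the example; the ARMv7=1 setting is needed here because it would otherwise be supplied by the top-level samples Makefile):

cd /usr/local/cuda/samples/1_Utilities/deviceQueryDrv
make ARMv7=1
./deviceQueryDrv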

Allan, thanks for this. Hopefully, NVIDIA will fix this soon in an update. Curious, what are you using the TK1 for?

I’m just using the TK1 for benchmarking. I have a codebase that runs well on discrete GPUs including the GK208 (2x the cores) and am very curious to see how the kernels perform on the TK1.

It would appear that the same library path issue also applies to various other NVIDIA libraries such as libGL.so and similar…

As root,

echo "/usr/local/cuda/lib" >> /etc/ld.so.conf.d/cuda.conf
ldconfig

This will make the standard CUDA libs visible to the dynamic loader.
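To confirm the loader now resolves them, something like this should list the cached CUDA libraries:

ldconfig -p | grep -i cuda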
As for the user's error about not finding libcuda, it would appear his install was incomplete: I just installed everything from the downloaded cuda-repo-l4t-r19.2_6.0-42_armhf.deb package (which must be done as an added step), and the tegra library path is clearly added for the loader in /etc/ld.so.conf.d/nvidia-tegra.conf.

The samples under /usr/local/cuda compile and execute fine for me.

-Mark

I properly installed the L4T .deb package. The ldconfig cache already had libcuda.so.

The Runtime API samples all build fine.

The issue is that the Driver API samples (the ones ending with the suffix "Drv") aren't building out of the box.

Printing the CUDA_SEARCH_PATH variable shows that it's "/usr/lib", and a search depth of 1 from there will never find libcuda.so.

Furthermore, the "/usr/arm-linux-gnueabihf/lib" line appears to have its directory components swapped (presumably it should be "/usr/lib/arm-linux-gnueabihf")?

I’m curious if the “*Drv” samples build for you?
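For context, the Drv Makefiles locate the driver library with a shallow find over CUDA_SEARCH_PATH, roughly along these lines (paraphrased; the exact variable names in the CUDA 6.0 Makefiles may differ):

CUDA_SEARCH_PATH ?= /usr/lib
# depth-1 search: libcuda.so sitting in /usr/lib/arm-linux-gnueabihf/tegra is never seen
CUDALIB ?= $(shell find -L $(CUDA_SEARCH_PATH) -maxdepth 1 -name libcuda.so 2>/dev/null)
ifeq ("$(CUDALIB)","")
$(info >>> WARNING - libcuda.so not found, the Driver API sample will not link <<<)
endif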

While there is something wrong with the Drv Makefiles (I hadn't tried them until you just mentioned it), it isn't the "maxdepth". If you're building any of them from inside its own directory, you'll need to run "make ARMv7=1", which would otherwise have been set by the top-level Makefile in /usr/local/cuda/samples. That's a minor thing. The error in the Makefile is twofold: first, CUDA_SEARCH_PATH is wrong, as you pointed out, and should be corrected to read "/usr/lib/arm-linux-gnueabihf/tegra". Second, you should add a line immediately below it to adjust the LIBRARIES variable; otherwise CUDA_SEARCH_PATH is all but useless. :) So to repeat the now-corrected block:

ifneq ($(DARWIN),)
ALL_LDFLAGS += -Xlinker -framework -Xlinker CUDA
else
CUDA_SEARCH_PATH ?=
ifeq ($(ARMv7),1)
ifneq ($(TARGET_FS),)
LIBRARIES += -L$(TARGET_FS)/usr/lib
CUDA_SEARCH_PATH += $(TARGET_FS)/usr/lib
endif
CUDA_SEARCH_PATH += /usr/lib/arm-linux-gnueabihf/tegra
LIBRARIES += -L$(CUDA_SEARCH_PATH)

This is followed by the 'else' branch and the rest of the block as it already was. I tested this with deviceQueryDrv in 1_Utilities with perfect results, and it should be easy to fix the other Drv samples the same way. Again, if compiling inside the Drv sample's own directory (and not from the top-level /usr/local/cuda/samples), remember to "make ARMv7=1". From /usr/local/cuda/samples itself, "make" alone sets up the environment and builds fine.
If you have any other problems, just holler. BTW the tabs got removed in my paste above, but you should be able to match up the lines in the Makefile.

Regards,
Mark

Thanks for the fix. :)

I’ll file a bug if someone hasn’t already… Filed #515648.

Thanks, allanmac, for the Makefile edits above. For the 4 *Drv samples, those edits seem to work for me.

But there are also samples that fail with the following message:

ubuntu@tegra-ubuntu:~/NVIDIA_CUDA-6.0_Samples$ make &> make_log.log 
ubuntu@tegra-ubuntu:~/NVIDIA_CUDA-6.0_Samples$ grep WARNING make_log.log 
>>> WARNING - required GPU not available on this platform - waiving sample <<<
>>> WARNING - required GPU not available on this platform - waiving sample <<<
WARNING - CUDA OpenMP Libraries are not found
WARNING - CUDA OpenMP Libraries are not found
WARNING - No MPI compiler found.
>>> WARNING - required GPU not available on this platform - waiving sample <<<
>>> WARNING - required GPU not available on this platform - waiving sample <<<
>>> WARNING - required GPU not available on this platform - waiving sample <<<
>>> WARNING - required GPU not available on this platform - waiving sample <<<
>>> WARNING - required GPU not available on this platform - waiving sample <<<
>>> WARNING - required GPU not available on this platform - waiving sample <<<
>>> WARNING - required GPU not available on this platform - waiving sample <<<

cdpSimpleQuicksort is an example…

>>> WARNING - required GPU not available on this platform - waiving sample <<<
make[1]: Entering directory `/home/ubuntu/NVIDIA_CUDA-6.0_Samples/0_Simple/cdpSimpleQuicksort’

Did these build for you? Has anyone gotten these to build?

This seems to be the relevant part of the Makefile… It appears to indicate that these samples can't be built on ARM processors? Is that true?

################################################################################

EXEC   ?=

# CUDA code generation flags

GENCODE_SM35    := -gencode arch=compute_35,code=sm_35
GENCODE_SM50    := -gencode arch=compute_50,code=sm_50
GENCODE_SMXX    := -gencode arch=compute_50,code=compute_50
ifeq ($(OS_ARCH),armv7l)
$(info >>> WARNING - required GPU not available on this platform - waiving sample <<<)
EXEC          := @echo "@"
else
GENCODE_FLAGS   ?= $(GENCODE_SM35) $(GENCODE_SM50) $(GENCODE_SMXX)
endif

ALL_CCFLAGS += -dc

LIBRARIES += -lcudadevrt

################################################################################
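For what it's worth, the waiving mechanism is simply that every build recipe in the sample is prefixed with $(EXEC) (as in the link line quoted earlier in the thread); once EXEC is set to @echo "@", make merely echoes each command instead of running it. A minimal hypothetical Makefile sketch of the idea (not the samples' actual file; OS_ARCH there presumably comes from uname -m):

# Hypothetical sketch of the waiving mechanism, not taken from the samples
EXEC ?=
ifeq ($(shell uname -m),armv7l)
$(info >>> WARNING - required GPU not available on this platform - waiving sample <<<)
EXEC := @echo "@"
endif

sample:
	$(EXEC) nvcc -arch=sm_35 -o sample sample.cu   # on armv7l this line is only echoed, never run

(The recipe line is indented with a tab.)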

The OpenMP issues look like they should have been fixed as of CUDA 5.5:

http://docs.nvidia.com/cuda/cuda-samples/#axzz38jKwejtT

libgomp* appears to exist, but just isn't being found by the samples that require OpenMP?

ubuntu@tegra-ubuntu:~/NVIDIA_CUDA-6.0_Samples$ sudo find / -name "*gomp*"
[sudo] password for ubuntu: 
/usr/share/doc/libgomp1
/usr/share/doc/gcc-4.8-base/gomp
/usr/share/doc/gcc-4.8-base/test-summaries/libgomp.sum.gz
/usr/lib/arm-linux-gnueabihf/libgomp.so.1.0.0
/usr/lib/arm-linux-gnueabihf/libgomp.so.1
/usr/lib/gcc/arm-linux-gnueabihf/4.8/libgomp.spec
/usr/lib/gcc/arm-linux-gnueabihf/4.8/libgomp.a
/usr/lib/gcc/arm-linux-gnueabihf/4.8/libgomp.so
/var/lib/dpkg/info/libgomp1:armhf.md5sums
/var/lib/dpkg/info/libgomp1:armhf.shlibs
/var/lib/dpkg/info/libgomp1:armhf.symbols
/var/lib/dpkg/info/libgomp1:armhf.list
/var/lib/dpkg/info/libgomp1:armhf.postrm
/var/lib/dpkg/info/libgomp1:armhf.postinst
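Given that the find output above shows libgomp in the standard armhf locations, a quick sanity check (hypothetical, and independent of however the samples' makefiles do their detection) is to ask gcc and the loader directly:

gcc -print-file-name=libgomp.so
ldconfig -p | grep libgomp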

I’m able to compile all the samples except these 9:

0_Simple/cdpSimpleQuicksort
0_Simple/cdpSimplePrint
5_Simulations/cdpAdvancedQuicksort
6_Advanced/interval
6_Advanced/cdpLUDecomposition
6_Advanced/cdpBezierTessellation
6_Advanced/cdpQuadtree
6_Advanced/StreamPriorities
7_CUDALibraries/simpleDevLibCUBLAS

They all give me this same error (presumably due to the check for the ARM architecture):

>>> WARNING - required GPU not available on this platform - waiving sample <<<

From:

http://docs.nvidia.com/cuda/cuda-samples/#axzz38sh8s2N2

It appears at least 8 of the 9 require CUDA compute capability 3.5, but the Jetson is only 3.2. Does anyone know if there's a workaround for this?
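(For reference, the deviceQuery sample reports the device's compute capability directly; assuming the samples were built in place, something like this prints it, adjusting the path if you built a copy under your home directory instead:)

/usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery | grep -i capability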