MVAPICH2 build error on Jetson TK1

I tried to compile MVAPICH2 with CUDA support, but “./configure --enable-cuda --with-cuda=/usr/local/cuda-6.5” failed with

checking cuda.h usability… no
checking cuda.h presence… no
checking for cuda.h… no
checking cuda_runtime_api.h usability… no
checking cuda_runtime_api.h presence… no
checking for cuda_runtime_api.h… no
configure: WARNING: Specified --enable-cuda switch, but could not
configure: WARNING: find appropriate support
configure: error: Cannot continue
configure: error: ./configure failed for contrib/hwloc

CUDA is installed in /usr/local/cuda-6.5 and cuda.h is located in /usr/local/cuda-6.5/include.

What did I do wrong?


I’m not familiar with MVAPICH2, but it isn’t unusual for the version of CUDA to get in the way. CUDA 6.5 only functions on L4T R21.1 through R21.3 (R21.x). Assuming your Jetson is running L4T R21.x, does MVAPICH2 work with CUDA 6.5?

According to the MVAPICH2 user guide, it works with CUDA 4.0 and later. I didn’t find anything about 6.x, so I’m not sure.

A lot can change in going from CUDA 4 to 6.5. There may have been changes to the directory layout in 6.5 (since 4.0) that break the configure script’s detection of what it thinks it needs. The CUDA API also changed significantly between 4 and 6.5. This is only speculation, but MVAPICH2 will probably need at least minor changes to work with CUDA 6.5: the configure script will need to be adjusted for the 6.5 file/directory layout, and API differences can be dealt with once a compile is attempted.
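One way to see what is actually going wrong is to reproduce the failing header check by hand, since configure’s “usability… no” summary hides the real compiler error. This is only a rough sketch of the kind of test configure runs, not MVAPICH2’s exact check; the CUDA_HOME path is the one from the original post:

```shell
# Reproduce (roughly) configure's "checking cuda.h usability" test:
# compile a one-line program with -I pointing at the CUDA include dir.
# CUDA_HOME is the path from the post; adjust for your install.
CUDA_HOME=${CUDA_HOME:-/usr/local/cuda-6.5}
cat > /tmp/conftest.c <<'EOF'
#include <cuda.h>
int main(void) { return 0; }
EOF
if ${CC:-cc} -I"$CUDA_HOME/include" -c /tmp/conftest.c -o /tmp/conftest.o 2>/tmp/conftest.err; then
  echo "cuda.h usable"
else
  echo "cuda.h NOT usable"
  cat /tmp/conftest.err   # the real compiler error configure does not show
fi
```

The same error text also ends up in config.log in the build directory, which is worth reading whenever a configure header check fails.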

I just received a reply from mvapich2-discuss. It seems that CUDA support is only for Infiniband systems.

Sorry to raise this thread from the dead… but I am having the same problem on the TK1 with OpenMPI.

OpenMPI 1.10.2 and the latest JetPack 2.0 with L4T R21.4.

Using the configure flag ./configure --with-cuda does not find cuda.h or cuda_runtime_api.h.
Using the configure flag ./configure --with-cuda=/usr/local/cuda-6.5 does not find cuda.h or cuda_runtime_api.h either.

However, I found a Stack Exchange article where someone symlinked cuda.h and cuda_runtime_api.h into /usr/include, after which configure ran fine.

I did the same thing: created symlinks of those two header files in /usr/include, and my ./configure got past the cuda.h check successfully, but not past cuda_runtime_api.h.
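For anyone trying the same workaround, this is the shape of it, demonstrated here in scratch directories so it is safe to run anywhere. On the actual board, SRC would be /usr/local/cuda-6.5/include, DST would be /usr/include, and the ln commands would need sudo:

```shell
# Demonstrate the symlink workaround with stand-in directories.
# On the TK1 itself: SRC=/usr/local/cuda-6.5/include, DST=/usr/include (with sudo).
SRC=/tmp/fake-cuda-include
DST=/tmp/fake-usr-include
mkdir -p "$SRC" "$DST"
touch "$SRC/cuda.h" "$SRC/cuda_runtime_api.h"   # stand-ins for the real headers
for h in cuda.h cuda_runtime_api.h; do
  ln -sf "$SRC/$h" "$DST/$h"                    # link BOTH headers, not just cuda.h
done
ls -l "$DST"
```

Note that cuda_runtime_api.h itself includes further CUDA headers, so linking only the two files may still not be enough for a compile check to pass; pointing configure at the real include directory is the cleaner fix when it works.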

Any suggestions?

OpenMPI 1.6.5 is available on the Jetson TK1 already (not sure if it was on my board after the JetPack installation or if I installed the package later), is there some functionality in 1.10.2 that you need specifically?

First, to answer cstotts’ question: yes, the reason to download and build 1.10.2 is that OpenMPI 1.6.5 does not support CUDA, so that’s a good reason to upgrade. (The earliest version with CUDA support was 1.7, but it was incomplete.)

Second, I discovered that CUDA support is in fact building just fine. ./configure reported that CUDA support was not found, but after running ./configure and then ‘make install all’, the sample CUDA MPI programs work.

./configure was just complaining for no reason, apparently.
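Rather than trusting configure’s summary either way, you can ask the installed Open MPI itself whether it was built CUDA-aware; the ompi_info query below is the one documented in the Open MPI FAQ (guarded here so it is harmless on a machine without Open MPI on the PATH):

```shell
# Query the installed Open MPI for its CUDA build flag (from the Open MPI FAQ).
if command -v ompi_info >/dev/null 2>&1; then
  ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
else
  echo "ompi_info not found on this machine"
fi
```

A value of true confirms the build picked up CUDA regardless of what the configure output said.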