Compiling CUDA looks for wrong CUDA version

On Ubuntu, I previously had an installation of CUDA 6.5 and wanted to upgrade to CUDA 7.0. So I deleted the directory at /usr/local/cuda-6.5 and installed CUDA 7.0 into /usr/local/cuda-7.0. I then changed the symbolic link at /usr/local/cuda to point to /usr/local/cuda-7.0. In my .bashrc file, I also updated the environment variables accordingly:

export CUDA_HOME=/usr/local/cuda-7.0
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64
export PATH=${CUDA_HOME}/bin:${PATH}
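
The symlink change was done with commands roughly like these (reconstructed from memory, not copied from my shell history):

sudo rm /usr/local/cuda
sudo ln -s /usr/local/cuda-7.0 /usr/local/cuda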

If I type in “nvcc --version”, then I get the following as expected:

Cuda compilation tools, release 7.0, V7.0.27

However, I am now compiling some code (the Caffe deep learning library, to be precise) which uses CUDA, and I am getting the following error message:

error while loading shared libraries: libcudart.so.6.5: cannot open shared object file: No such file or directory

So for some reason, it is still looking for the CUDA 6.5 libraries, rather than the CUDA 7.0 libraries. Why is this? How do I tell the compiler to look for the 7.0 libraries? I cannot find any reference to libcudart.so.6.5 in the source code I am compiling, so the CUDA compiler itself is looking for the wrong version.

Have a look at the Makefile you are using. The error message is from the compile/link stage; there are defines in the Makefiles. Look for something like LIBS=-L/usr/local/cudaXXX
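
Since you mention Caffe: its Makefile-based build reads Makefile.config, which (if I remember the layout correctly) has a line along these lines that the -I/-L flags are built from:

CUDA_DIR := /usr/local/cuda

If that still points at the old install, or if stale object files from the 6.5 build are lying around, the resulting binaries keep referencing 6.5.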

The message you have posted is a runtime message, not a compile-time message. There is some component that is still linked against the old library that you have not properly recompiled or relinked. Try doing a make clean.
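
For the Caffe Makefile build that would be something like the following, run from the source tree (if you use the CMake build instead, wipe and regenerate the build directory):

make clean
make all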

If it is indeed a runtime error, as txbob says, you can find the offending binary with:

ldd <binary_that_fails>

There will be a line showing libcudart.so.6.5 as unresolved.
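
For example (output illustrative, binary path assumed for a default Caffe build):

ldd ./build/tools/caffe | grep cudart
        libcudart.so.6.5 => not found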

The correct way is to recompile any libs and binaries involved.

If you are really desperate and need to have it working NOW,
you can make a symlink from the .so.7.0 to the .so.6.5 name.

This is of course NOT supported, but if you are desperate …
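
If you go that route, it would be something like this (library names and path assumed for a default CUDA 7.0 install):

cd /usr/local/cuda-7.0/lib64
sudo ln -s libcudart.so.7.0 libcudart.so.6.5

The binary then asks for 6.5 but silently gets the 7.0 runtime, which is exactly why this is unsupported.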