CUDA 4.2 Install in Ubuntu 12.04

This post gives a complete picture of how to install and test CUDA 4.2 on Ubuntu 12.04 64-bit. I have found that everything that needs to be done is quite scattered, and bringing it together should make CUDA more accessible to novice users such as myself. So if this doesn’t work for you, please post and let us know; and if you know a quicker or better way, please enlighten us!



I hope you find this useful and I would like to credit my sources:



The NVIDIA CUDA Getting Started Guide (Linux)

Note that you must have a CUDA-capable nVidia GPU.

Check whether the OS is 32- or 64-bit by running

uname -m

in a terminal. i686 denotes a 32-bit system, and x86_64 denotes a 64-bit one.
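If you want to script this check, a small sketch that maps the machine type to the right download (the messages are just for illustration):

```shell
# Print which CUDA package architecture matches this machine.
ARCH=$(uname -m)
case "$ARCH" in
    x86_64) echo "64-bit system: download the 64-bit CUDA packages" ;;
    i?86)   echo "32-bit system: download the 32-bit CUDA packages" ;;
    *)      echo "architecture $ARCH is not covered by this guide" ;;
esac
```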

Proceed to the CUDA Downloads page.

For the toolkit, I chose the one titled Ubuntu 11.04. Save all three files (driver, toolkit, and SDK) in an easy-to-access location, like your Home folder.

Make sure the requisite tools are installed using the following command:

sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev

Next, blacklist the required modules (so that they don’t interfere with the driver installation)

gksu gedit /etc/modprobe.d/blacklist.conf

Add the following lines to the end of the file, one per line:

blacklist amd76x_edac

blacklist vga16fb

blacklist nouveau

blacklist rivafb

blacklist nvidiafb

blacklist rivatv

Save the file and exit gedit.
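If you prefer the terminal to gedit, the six lines can also be generated with a short loop; a sketch (review the output first, then append it with `sh gen_blacklist.sh | sudo tee -a /etc/modprobe.d/blacklist.conf`, where gen_blacklist.sh is a hypothetical name for this snippet):

```shell
# Generate the six "blacklist <module>" lines used above.
MODULES="amd76x_edac vga16fb nouveau rivafb nvidiafb rivatv"
for m in $MODULES; do
    printf 'blacklist %s\n' "$m"
done
```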

In order to get rid of any nVidia residuals, run the following command in a terminal:

sudo apt-get remove --purge nvidia*

Once it’s done, reboot your machine. At the login screen, don’t login just yet. Press Ctrl+Alt+F1 to switch to a text-based login. Login and switch to the directory which contains the downloaded drivers, toolkit and SDK. Run the following commands:

sudo service lightdm stop

chmod +x devdriver*.run

sudo ./devdriver*.run

Follow the onscreen instructions. If the installer throws an error about the nouveau kernel module still being loaded, allow it to create a blacklist for nouveau, quit the installation and reboot. In that case, run the following commands again:

sudo service lightdm stop

sudo ./devdriver*.run

The installation should now proceed smoothly. When it asks you if you want the 32-bit libraries and if you want it to edit xorg.conf to use these drivers by default, allow both.

Reboot once the installation completes.

Next, enter the following in a terminal window (in the directory where the files are stored):

chmod +x cudatoolkit*.run

sudo ./cudatoolkit*.run

where cudatoolkit*.run is the full name of the toolkit installer. I recommend leaving the installation path to its default setting (/usr/local/cuda) unless you have a specific reason for not doing so.

The following lines must be added into the .bashrc file in your home directory:

export PATH=/usr/local/cuda/bin:$PATH

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

export LD_LIBRARY_PATH=/usr/lib/nvidia-current:$LD_LIBRARY_PATH

export CUDA_ROOT=/usr/local/cuda/bin
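If you would rather not paste these by hand, here is a sketch that appends each line only if it is not already present, so running it twice does not duplicate them (the BASHRC override and the add_line helper are inventions of this snippet, not part of the guide):

```shell
# Append the CUDA environment lines to .bashrc idempotently.
BASHRC=${BASHRC:-$HOME/.bashrc}
add_line() {
    # -x matches the whole line, -F treats it literally; append only if absent.
    grep -qxF "$1" "$BASHRC" 2>/dev/null || echo "$1" >> "$BASHRC"
}
add_line 'export PATH=/usr/local/cuda/bin:$PATH'
add_line 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH'
add_line 'export LD_LIBRARY_PATH=/usr/lib/nvidia-current:$LD_LIBRARY_PATH'
add_line 'export CUDA_ROOT=/usr/local/cuda/bin'
```

Remember to open a new terminal (or `source ~/.bashrc`) afterwards so the settings take effect.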

The SDK must be installed as a regular user (not as root) to prevent access issues with the SDK files.

Once the toolkit is installed, enter the following in a terminal:

chmod +x gpucomputingsdk*.run

./gpucomputingsdk*.run

where gpucomputingsdk*.run is the full name of the SDK installer. Again, follow the instructions onscreen to complete the installation.

Since there are several linking errors in this compilation, several modifications must be made. You become aware of this when you change your working directory to NVIDIA_GPU_Computing_SDK in your home directory and then run:

make

This will result in the following error:

../../lib/librendercheckgl_x86_64.a(rendercheck_gl.cpp.o): In function `CheckBackBuffer::checkStatus(char const*, int, bool)': rendercheck_gl.cpp:(.text+0xfbb): undefined reference to `gluErrorString'

The fix is contained in the following three steps.

  1. Change the link order for all occurrences of RENDERCHECKGLLIB in the common makefiles under BOTH /C/common/ and /CUDALibraries/common/ so that $(RENDERCHECKGLLIB) appears before $(OPENGLLIB) (librendercheckgl itself uses GLU symbols such as gluErrorString, so GLU must be scanned after it), and add -L../../../C/lib to the RENDERCHECKGLLIB definition line:

RENDERCHECKGLLIB := -L../../../C/lib -lrendercheckgl_$(LIB_ARCH)$(LIBSUFFIX)


  2. In all files that use UtilNPP (boxFilterNPP, imageSegmentationNPP, freeImageInteropNPP, histEqualizationNPP), the makefiles have the wrong order of library linking ($(LIB) comes before the source/output files), so go to each /CUDALibraries/src/*NPP/Makefile and change the order (this example shows the freeImageInteropNPP modification; the others are done similarly) from:

$(CXX) $(INC) $(LIB) -o freeImageInteropNPP freeImageInteropNPP.cpp -lUtilNPP_$(LIB_ARCH) -lfreeimage$(FREEIMAGELIBARCH)

to:

$(CXX) $(INC) -o freeImageInteropNPP freeImageInteropNPP.cpp $(LIB) -lUtilNPP_$(LIB_ARCH) -lfreeimage$(FREEIMAGELIBARCH)


  3. For randomFog, you also need to add

to the Makefile.
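The root cause behind these Makefile fixes is classic static-library link order: with traditional linkers, a library must appear after the objects that use its symbols. This can be demonstrated with a tiny standalone example; all file and symbol names below are invented for the demo, and it is guarded so it does nothing on machines without a C compiler:

```shell
# Demonstrates why $(LIB) must come after the source/object files.
if command -v cc >/dev/null 2>&1; then
    HAVE_CC=1
    tmp=$(mktemp -d) && cd "$tmp"
    printf 'int util_answer(void) { return 42; }\n' > util.c
    printf 'int util_answer(void);\nint main(void) { return util_answer() == 42 ? 0 : 1; }\n' > main.c
    cc -c util.c && ar rcs libutil.a util.o
    # Wrong order: the library is scanned before main.c needs its symbols,
    # so nothing is extracted and the reference stays undefined.
    if cc -L. -lutil main.c -o demo 2>/dev/null; then
        echo "lenient linker: wrong order linked anyway"
    else
        echo "wrong order failed to link, as expected"
    fi
    # Right order: sources/objects first, libraries after them.
    cc main.c -L. -lutil -o demo && ./demo && echo "right order links and runs"
else
    HAVE_CC=0
    echo "no C compiler available; skipping demo"
fi
```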

Now go back to the working directory NVIDIA_GPU_Computing_SDK and run

make

This will take some time to fully compile, so be patient. The compiler will also throw several warnings, but these can be ignored.

The version of the CUDA Toolkit can be checked by running

nvcc -V

in a terminal window. The nvcc command runs the compiler driver that compiles CUDA programs. It calls the gcc compiler for C code and the NVIDIA PTX compiler for the CUDA code.
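To sanity-check the toolchain end to end, you can compile a trivial .cu file with nvcc. A sketch (the file name and messages are made up here, and the whole thing is guarded so it degrades gracefully when nvcc is not on the PATH):

```shell
# Smoke-test nvcc by compiling and running a trivial CUDA source file.
if command -v nvcc >/dev/null 2>&1; then
    tmp=$(mktemp -d)
    cat > "$tmp/hello.cu" <<'EOF'
#include <cstdio>
int main() { printf("nvcc works\n"); return 0; }
EOF
    nvcc "$tmp/hello.cu" -o "$tmp/hello"
    MSG=$("$tmp/hello")
else
    MSG="nvcc not on PATH; check the exports in ~/.bashrc"
fi
echo "$MSG"
```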

NVIDIA includes sample programs in source form in the GPU Computing SDK. You should compile them all by changing to ~/NVIDIA_GPU_Computing_SDK/C and running

make

The resulting binaries will be installed in ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release.

After compilation, go to ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release and run

./deviceQuery

If the CUDA software is installed and configured correctly, the output for deviceQuery should look similar to that shown in Figure 1 of the reference guide provided by NVIDIA. The exact appearance and the output lines might be different on your system. The important outcomes are that a device was found (the first highlighted line), that the device matches the one on your system (the second highlighted line), and that the test passed (the final highlighted line). If a CUDA-enabled device and the CUDA Driver are installed but deviceQuery reports that no CUDA-capable devices are present, this likely means that the /dev/nvidia* files are missing or have the wrong permissions.
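The /dev/nvidia* check can be scripted; a small diagnostic sketch (the chmod suggestion in the comment is a common workaround, not something prescribed by NVIDIA's guide):

```shell
# Check that the /dev/nvidia* device nodes exist and are accessible.
found=0
for dev in /dev/nvidia*; do
    [ -e "$dev" ] || continue
    found=1
    if [ -r "$dev" ] && [ -w "$dev" ]; then
        echo "$dev: permissions look fine"
    else
        echo "$dev: wrong permissions (e.g. fix with: sudo chmod a+rw $dev)"
    fi
done
[ "$found" -eq 1 ] || echo "no /dev/nvidia* nodes found; is the driver loaded?"
```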


First, thanks for this nice step-by-step tutorial. I have the following problem:

ubuntu x64 11.04

gtx 460m video card

cuda and sdk installed

environment settings for cuda and sdk:

export PATH
when i go to /opt/cuda/C/src/deviceQuery and type the command “make”, the following error occurs:

"/usr/bin/ld: cannot find -lshrutil_x86_64

collect2: ld returned 1 exit status

make: *** [../../bin/linux/release/deviceQuery] Error 1"

I installed the sdk as root too :(. I hope I didn’t break anything.

Can you help me?

Thanks in advance


I wrote a little tutorial to install CUDA on Ubuntu. It’s made to be as easy as possible.


Tell me if there is any problem (or if my English is bad).

Thanks !

Thanks for the help. You also have a nice tutorial. It works now.

Thanks once again.

Have a nice day.

I had a problem with the resolution after successfully installing the driver. Since then, the screen resolution appears to be 640x480 (4:3) on my laptop with an Nvidia GeForce GT 630M card. What can I do about this?

Hi sir:

When I installed the driver, I clicked “Yes” when it asked whether to update “X-config” (I can’t remember exactly, but it is the last option during installation). Now I can only access the command line instead of the graphical interface. What should I do to fix this? Can you tell me? I’m not very familiar with Ubuntu yet.


Thanks for your post. When I followed these steps the cuda toolkit installed fine, and the gpucomputingsdk*.run also installed OK, but when I tried to make in the NVIDIA_GPU_Computing_SDK directory I got the following errors:

riaz@riaz-Aspire-4755:~/NVIDIA_GPU_Computing_SDK$ make

make[1]: Entering directory `/home/riaz/NVIDIA_GPU_Computing_SDK/shared’

make[1]: Leaving directory `/home/riaz/NVIDIA_GPU_Computing_SDK/shared’

make[1]: Entering directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C’

make[2]: Entering directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/common’

make[2]: Leaving directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/common’

make[2]: Entering directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/common’

make[2]: Leaving directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/common’

make[2]: Entering directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/common’

make[2]: Leaving directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/common’

make[2]: Entering directory `/home/riaz/NVIDIA_GPU_Computing_SDK/shared’

make[2]: Leaving directory `/home/riaz/NVIDIA_GPU_Computing_SDK/shared’

make[2]: Entering directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/src/threadMigration’

/usr/bin/ld: cannot find -lcuda

collect2: ld returned 1 exit status

make[2]: *** [../../bin/linux/release/threadMigration] Error 1

make[2]: Leaving directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C/src/threadMigration’

make[1]: *** [src/threadMigration/Makefile.ph_build] Error 2

make[1]: Leaving directory `/home/riaz/NVIDIA_GPU_Computing_SDK/C’

make: *** [all] Error 2

I also followed the steps for RENDERCHECKGLLIB, UtilNPP and randomFog, but got the above errors.

Can anyone please give a solution to this?

N.B. my system config: Ubuntu 12.04 32-bit with a Core i5 and 6 GB RAM.

For people with Optimus-enabled laptops: in order to install CUDA you need to modify the procedure. In this case one needs to install Bumblebee, which enables Optimus support and in the process installs the nVidia driver. The rest is the same. The compiled programs are run with optirun ./cudaprogram

Well, since I have an Optimus laptop with an nVidia card, I installed Bumblebee (which includes the drivers) and then ran glxspheres, and the nVidia drivers worked fine. I followed the steps in the guide, and after installing the CUDA toolkit I got this error: “error while loading shared libraries: cannot open shared object file: No such file or directory.”

Obviously I’m using the optirun command, but I still get this error. The library does in fact exist, but it seems I have no permission to read it, or some link is lost… any help please?

If you run from a terminal, don’t forget to put the script file as written in this document.

Thanks for a great tutorial! Now that the CUDA install is known to work on 12.04, I will finally upgrade to it.

My only suggestion would be to configure the dynamic linker directly, rather than through the environment variable LD_LIBRARY_PATH. The main reason is that not every executable run on the system honors the environment as specified in shell config files; for example, cron doesn’t load the user’s environment before starting jobs. Also, the CUDA install under /usr/local seems more like a system-wide change than a per-user one. Hence it’s more logical to change the system as a whole to integrate with CUDA, rather than individual user settings.

In order to configure the dynamic linker to use the CUDA shared libraries, one needs to drop a text file with the paths to the libraries (one per line) into the /etc/ directory:

$ cat /etc/ 

Obviously, the actual directories used for the installation should be referenced in that file; they are the values of the $LD_LIBRARY_PATH variable above.

Then one needs to run ldconfig as root, only once (not on every reboot).

The above approach seems rather portable to me, so I don’t understand why nVidia does not implement it in the packaged run script. I would suggest that the run script check for the existence of the /etc/ directory on the system and drop the file with the paths there, followed by an invocation of ldconfig.
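A sketch of what this comment describes, assuming the standard Ubuntu mechanism where ldconfig reads *.conf files from /etc/ld.so.conf.d. It is shown against a scratch directory so nothing is touched; for real use, set CONF_DIR=/etc/ld.so.conf.d, write the file as root, and run ldconfig once:

```shell
# Register the CUDA library directories with the dynamic linker
# instead of relying on LD_LIBRARY_PATH in each user's shell config.
CONF_DIR=${CONF_DIR:-$(mktemp -d)}
printf '%s\n' /usr/local/cuda/lib64 /usr/lib/nvidia-current > "$CONF_DIR/cuda.conf"
cat "$CONF_DIR/cuda.conf"
# then (as root, once): ldconfig
```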

As far as I remember, that was it for me.

Hope this helps.

This error suggests that you do not have the library paths in your path variables.

From the first post:

The following lines must be added into the .bashrc file in your home directory:

export PATH=/usr/local/cuda/bin:$PATH

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

export LD_LIBRARY_PATH=/usr/lib/nvidia-current:$LD_LIBRARY_PATH

export CUDA_ROOT=/usr/local/cuda/bin

Hi, thanks for the tutorial!

I’m having an issue with the sdk compiling, right after we modify the Makefiles and type make in the sdk directory, this message appears after all the warnings:

make -C src/simpleCUFFT/
make[2]: Entering directory `/home/gpgpu-sim/NVIDIA_GPU_Computing_SDK/CUDALibraries/src/simpleCUFFT'
make[2]: /usr/local/cuda/bin/nvcc: Command not found
make[2]: *** [obj/x86_64/release/] Error 127
make[2]: Leaving directory `/home/gpgpu-sim/NVIDIA_GPU_Computing_SDK/CUDALibraries/src/simpleCUFFT'
make[1]: *** [src/simpleCUFFT/Makefile.ph_build] Error 2
make[1]: Leaving directory `/home/gpgpu-sim/NVIDIA_GPU_Computing_SDK/CUDALibraries'
make: *** [all] Error 2

I already had a problem related to nvcc, which I fixed by adding a symbolic link. But this time, the compiler refers to the path “/usr/local/cuda/bin”, which doesn’t even exist, since I have CUDA installed in “home/cuda”. I’ll appreciate your help; I’ve been stuck with the SDK installation for days now.

Thank you very much!