CUDA runtime API errors after upgrading to 4.0

Hello,

I have recently been trying to upgrade from CUDA 3.2 to 4.0.

I updated to the latest driver. This allowed me to compile most of the SDK examples, though a few still fail to build because of files that are missing from the tar I downloaded; I can't figure out why they are absent.

Further, most of the SDK examples that do compile fail when I run them.

For example, the boxFilter example gives me the following:

[boxFilter] starting...
./boxFilter Starting...

Loaded '../../../src/boxFilter/lenaRGB.ppm', 1024 x 1024 pixels

freeglut (./boxFilter): Unable to create direct context rendering for window 'CUDA Box Filter'
This may hurt performance.
Error: failed to get minimal extensions for demo
This sample requires:
OpenGL version 1.5
GL_ARB_vertex_buffer_object
GL_ARB_pixel_buffer_object

And oceanFFT:

[dcole@epa1 release]$ ./oceanFFT
[CUDA FFT Ocean Simulation]

Left mouse button - rotate
Middle mouse button - pan
Right mouse button - zoom
'w' key - toggle wireframe
[CUDA FFT Ocean Simulation]
freeglut (./oceanFFT): Unable to create direct context rendering for window 'CUDA FFT Ocean Simulation'
This may hurt performance.
Error: failed to get minimal extensions for demo
This sample requires:
OpenGL version 1.5
GL_ARB_vertex_buffer_object
GL_ARB_pixel_buffer_object
oceanFFT.cpp(671) : cudaSafeCall() Runtime API error 33: invalid resource handle.

Those are just two examples. matrixMul reports "FAILED" too, but it does at least run.

As for the CUDA code I have written myself, the very first cudaMalloc in any of my programs fails with something like the following:

../common/cuLeefilter.cu(30) : cudaSafeCall() Runtime API error 47: device kernel image is invalid.

or

cuYamaguchi.cu(156) : cudaSafeCall() Runtime API error 11: invalid argument.

or something like that.
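
For reference, cudaSafeCall() in my code is the usual cutil-style wrapper around the runtime's error return. A minimal stand-alone sketch of the failing pattern (the file name checkedMalloc.cu and the buffer size are made up for illustration, and this is not the SDK's exact macro) looks like this:

// checkedMalloc.cu - minimal sketch of a cutil-style cudaSafeCall()
// wrapper; illustrative only, not the SDK's exact code.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define cudaSafeCall(call)                                                  \
    do {                                                                    \
        cudaError_t err = (call);                                           \
        if (err != cudaSuccess) {                                           \
            fprintf(stderr,                                                 \
                    "%s(%d) : cudaSafeCall() Runtime API error %d: %s.\n",  \
                    __FILE__, __LINE__, (int)err, cudaGetErrorString(err)); \
            exit(EXIT_FAILURE);                                             \
        }                                                                   \
    } while (0)

int main()
{
    float *d_buf = 0;
    // This is the very first runtime call in the program, and it is
    // what comes back with error 47 or error 11 in my code.
    cudaSafeCall(cudaMalloc((void **)&d_buf, 1024 * sizeof(float)));
    cudaSafeCall(cudaFree(d_buf));
    printf("cudaMalloc/cudaFree succeeded\n");
    return 0;
}

As I understand it, the first runtime call is also what implicitly initializes the CUDA context, so a failure this early suggests something about the toolkit, driver, or the architecture the code was built for, rather than the kernels themselves.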

What is going wrong?

I should mention that deviceQuery runs fine. Here is the output:

Found 1 CUDA Capable device(s)

Device 0: "GeForce GTX 580"
CUDA Driver Version / Runtime Version 4.0 / 4.0
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 1535 MBytes (1609760768 bytes)
(16) Multiprocessors x (32) CUDA Cores/MP: 512 CUDA Cores
GPU Clock Speed: 1.54 GHz
Memory Clock rate: 2004.00 MHz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Concurrent kernel execution: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 3 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce GTX 580
[deviceQuery] test results...
PASSED

So I realized that a couple of the OpenGL examples I was trying to run were failing because I was running them over SSH.

My own software still fails, though, even after installing the developer driver from the CUDA 4.0 site. That driver is a downgrade from the latest driver, which is what I had installed before.

Is there a way to verify that an executable is actually using the version of the CUDA runtime that LD_LIBRARY_PATH points to?
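
One check I can think of (a minimal sketch; versionCheck.cu is just an illustrative name) is to ask the runtime itself, since cudaRuntimeGetVersion() reports the version of whichever libcudart the dynamic linker actually loaded:

// versionCheck.cu - print the driver and runtime versions actually in
// use; a minimal sketch for checking which libcudart got loaded.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int driverVersion = 0, runtimeVersion = 0;

    // CUDA version supported by the installed driver (4000 means 4.0).
    cudaDriverGetVersion(&driverVersion);

    // Version of the libcudart resolved by the dynamic linker at load
    // time, i.e. whatever LD_LIBRARY_PATH actually picked up.
    cudaRuntimeGetVersion(&runtimeVersion);

    printf("CUDA driver version:  %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10);
    printf("CUDA runtime version: %d.%d\n",
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    return 0;
}

From outside the program, running ldd on the executable should also show which libcudart.so the loader resolves, which would confirm whether LD_LIBRARY_PATH is being honored.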