Difficulties linking a JNI CUDA shared object to Java: loadLibrary succeeds but UnsatisfiedLinkError always occurs

We have a very large Java modelling application at work; the great majority of its CPU time is spent computing FFTs.

This app has had too much work put into it to just toss it and start over in C/CUDA for a speedup. But the idea is that, since so much of the time is spent doing FFTs, JNI could call out to CUDA for the FFTs and the Java could remain as is.

My platform is: CUDA 2.2 (driver, sdk,toolkit), openjdk, gcc 4.3.2, Fedora Linux (Fedora 10 x86_64), Netbeans 6.1 IDE,

and the card is an 8800 GTX.

The JNI code calls some CUDA CUFFT functions like cufftPlan1d() and so on; the C shared object is compiled from the .cu file, and the Java app loads it via System.loadLibrary(String) to do the FFTs.
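For reference, the overall build/run flow is roughly the following sketch. The file and library names here are placeholders, not the actual project's, and the `nvcc` flags are one plausible way to build a shared object, not necessarily the Makefile in question (on modern JDKs the javah step has been folded into javac -h, but CUDA 2.2-era setups used javah):

```shell
# 1. Compile the Java class that declares the native methods
javac fftdetail/FFTJNIImpl.java
# 2. Generate the JNI header with the Java_fftdetail_FFTJNIImpl_* prototypes
javah -jni fftdetail.FFTJNIImpl
# 3. Build the shared object from the .cu file; "fftcuda" is a placeholder name.
#    System.loadLibrary("fftcuda") will look for libfftcuda.so on java.library.path.
nvcc --shared --compiler-options '-fPIC' -o libfftcuda.so fftjni.cu -lcufft
# 4. Run, pointing the JVM at the directory holding libfftcuda.so
java -Djava.library.path=. Main
```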

I think the .so file is found and looked into, but at runtime I always get:

I’ve used System.getProperty() to confirm that the java.library.path is correct (i.e., the directory that contains my JNI CUDA .so file).

“boolean isCudaAvailable()” is the first method in the .so file that I use; I wrote it to determine whether the host has CUDA support (a CUDA-capable card plus toolkit/driver). If not, I fall back to the legacy math library the Java code is already using.

I pass the .so file’s directory to the Java app via -Djava.library.path=/ on the command line.

My suspicion is that JNI wants the function named like this in the CUDA .so file:

blahblah_Java_Packagename_Classname_jnimethod (in my case :

blahblah_Java_fftdetail_FFTJNIImpl_isCudaAvailable()Z )

but the nvcc compiler is setting up the function name differently than g++ would?
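That suspicion is easy to check outside of CUDA. Here is a minimal sketch using g++ as a stand-in for nvcc’s host-side compiler, with a stub body and an int parameter in place of the real JNIEnv*/jobject signature: compiled as C++ without C linkage, the exported symbol comes out mangled.

```shell
cat > mangle_demo.cpp <<'EOF'
// No extern "C": the C++ compiler mangles the exported name, so the JVM's
// lookup of the plain Java_fftdetail_FFTJNIImpl_isCudaAvailable name fails.
int Java_fftdetail_FFTJNIImpl_isCudaAvailable(int dummy) { return 1; }
EOF
g++ -c mangle_demo.cpp -o mangle_demo.o
nm mangle_demo.o   # mangled: _Z41Java_fftdetail_FFTJNIImpl_isCudaAvailablei
```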

This is my JNI .so Makefile:

Is there a problem with my nvcc arguments that makes this not work? I’m thinking the nvcc section might be missing a critical option.



I bet your code needs more extern “C”. Name mangling of functions still happens with --host-compilation=C on 2.2 (which sure seems like a bug).

I added --host-compilation C to the nvcc section after the problem appeared, on a guess that nvcc was misinterpreting my .cu file as C++, so I wanted to try forcing it to “C”. It didn’t seem to help or hurt.

Your extern “C” idea is sort of like something someone here suggested. I noticed that the JNI header created by the javah -jni step has extern “C” in it, but that block is only active when the __cplusplus macro is defined (i.e., when the header is compiled as C++).

.cu files are compiled with a C++ compiler regardless of the --host-compilation flag. So, if you’re compiling something with nvcc without extern “C”, the names exported are probably _Z7blahblah123 or similarly mangled. Putting extern “C” in your function definitions will probably solve this.


It works. I put extern “C” around the functions in the .cu file, then looked over the .so file with “nm” and could see the names were all corrected (C++ mangling removed):

00000000000010c0 T Java_fftdetail_FFTJNIImpl_backward

0000000000000fa0 T Java_fftdetail_FFTJNIImpl_forward

0000000000000f50 T Java_fftdetail_FFTJNIImpl_isCudaAvailable

Then I was able to link to the JNI .so functions in my Main.java and run the test. Unfortunately the two FFTs (JMSL and JNI/CUDA) come up with radically different numbers, but this could be because I feed the calculation a bizarre random set of 40 floats (complex numbers ranging over [0-100.0 +/- 0.0-100i]) rather than a smooth, well-behaved array. But that’s a whole other issue unrelated to this one; I’ll probably need a better input data set.

I also turned up a typo where a couple of JNI function names had the wrong package name embedded, because the original Java class had changed packages. I’m not really using the JNI features of NetBeans (if any), so there was nothing to catch the naming problem.