CUDA code management in a DLL: texture reference & constant memory issues

Hello everyone!

I’ve found similar topics in the discussions, but some of them are slightly different and the others don’t help.

So, I have a DLL containing a class implementation that uses CUDA code.

The DLL project is pretty simple:

  1. a header file with the class declaration (no CUDA references at all)

  2. a .cu file with all of the CUDA code, including the texture reference and constant memory declarations

  3. a .cpp DLL interface containing DllMain

I should mention that this class works perfectly when statically linked into another project.

I load the DLL via the LoadLibrary function. Objects of this class are created inside the DLL, in the .cpp file, and passed back to the caller as pointers to class instances. Objects are also deleted inside the DLL when the caller passes the pointer back.
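For reference, here is roughly what the exported interface looks like (Filter, createFilter and destroyFilter are just placeholder names, not my real ones):

// Filter.h - plain class declaration, no CUDA headers
class Filter {
public:
    Filter();
    ~Filter();                       // frees the CUDA resources allocated in init()
    void init();                     // uploads constant memory, binds the texture
    void run(const float *in, float *out, int width, int height);
};

// interface .cpp - C interface used through LoadLibrary/GetProcAddress
extern "C" __declspec(dllexport) Filter* createFilter()           { return new Filter(); }
extern "C" __declspec(dllexport) void    destroyFilter(Filter *f) { delete f; }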

The first time I create and delete an object, everything is OK - the CUDA kernels work fine and fast =). When I try to create it a second time (only after the first one has been deleted!), two errors appear:

  1. cudaMemcpyToSymbol returned “invalid device symbol” in this code:
__constant__ float kernel[DIAMETER];

...

void init()
{
    ...
    cudaMemcpyToSymbol( "kernel", h_kernel, DIAMETER*sizeof(float) );
    ...
}
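For completeness, this is the error-checked version of that step I would expect to work (same kernel, h_kernel and DIAMETER as above); passing the symbol itself instead of the string "kernel" should be equivalent, the string form just goes through a name lookup:

#include <cstdio>
#include <cuda_runtime.h>

__constant__ float kernel[DIAMETER];

void init()
{
    // pass the symbol directly and check the returned status
    cudaError_t err = cudaMemcpyToSymbol( kernel, h_kernel, DIAMETER * sizeof(float) );
    if (err != cudaSuccess)
        printf( "cudaMemcpyToSymbol failed: %s\n", cudaGetErrorString(err) );
}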

Adding “static __device__” to the declaration didn’t help.

I commented out the error check after cudaMemcpyToSymbol, but then another error appeared:

  2. cudaBindTextureToArray returns “invalid texture reference” in this code:
texture<float, 2, cudaReadModeElementType> tex1;

...

void init()
{
    ...
    tex1.addressMode[0] = cudaAddressModeClamp;
    tex1.addressMode[1] = cudaAddressModeClamp;
    tex1.filterMode = cudaFilterModePoint;
    tex1.normalized = false;

    cd = cudaCreateChannelDesc( 32, 0, 0, 0, cudaChannelFormatKindFloat );
    cudaMallocArray( &array1, &cd, width, height );
    cudaBindTextureToArray( tex1, array1, cd );
    ...
}
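And the same step with error checking, together with the teardown I would expect the destructor to perform (bind/release are just placeholder names; tex1, array1 and cd are the members shown above, same includes as in the previous sketch):

void bind()
{
    cudaError_t err = cudaBindTextureToArray( tex1, array1, cd );
    if (err != cudaSuccess)
        printf( "cudaBindTextureToArray failed: %s\n", cudaGetErrorString(err) );
}

void release()
{
    cudaUnbindTexture( tex1 );   // drop the binding before freeing the backing array
    cudaFreeArray( array1 );
    array1 = 0;
}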

I’m using MSVS 2008, CUDA runtime 3.2, Win7 64-bit and a GeForce GTX 580; the drivers were updated today.

Does code inside a DLL need to be managed in a special way? Maybe some declarations should be static?

And what will happen if two caller instances use the same DLL with CUDA code that relies on texture references or constant memory?


I had the same problem.

I could not find out what causes the problem, but a workaround can be to call cudaDeviceReset() before LoadLibrary. It is only a temporary solution, though, since it deletes every uploaded resource and clears all settings.
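Roughly what I mean, on the caller side (this assumes the caller itself links against the CUDA runtime; reloadCudaDll is just a placeholder name):

#include <cuda_runtime.h>
#include <windows.h>

// Reset the device after the previous DLL instance has been freed and before
// the next one is loaded, so the new instance starts from a clean context.
HMODULE reloadCudaDll(const char *path)
{
    cudaDeviceReset();          // drops all allocations, symbols and texture state
    return LoadLibraryA(path);
}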

Has somebody found a solution? I have the same problem. My code is:

texture<float, 2, cudaReadModeElementType> bilinTexture;

...

extern "C" __declspec(dllexport) cudaError_t __stdcall
interpolate3DCuda( float *output, const float *h_volume, float gain, unsigned int width, unsigned int height, unsigned int depth, unsigned int interpolated_depth )
{
    cudaExtent volumeSize = make_cudaExtent(width, height, depth);

    // create 3D array
    cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
    ...
    cudaArray *dev_input;
    cutilSafeCall( cudaMallocArray(&dev_input, &channelDesc, volumeSize.width, volumeSize.height) );
    ...
    bilinTexture.filterMode = cudaFilterModeLinear;
    bilinTexture.normalized = false;
    cutilSafeCall( cudaBindTextureToArray(bilinTexture, dev_input, channelDesc) );
    ...
}

The code works as a console application. But when I build it as a DLL and call it from C# code, I get an error when cudaBindTextureToArray is executed.

interpolate3DCuda is called from C# with the following DllImport declaration:

DllImport("CUDA_DLL_export.dll", EntryPoint = "interpolate3DCuda", CallingConvention = CallingConvention.StdCall)]

public static extern void interpolate3DCuda(float[] slices, float[] h_volume, float gain, uint width, uint height, uint depth, uint interpolated_height);
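One thing I am not sure about (not claiming it is the cause): the native function returns cudaError_t while the C# declaration is void, so the status that would say which CUDA call failed is thrown away. While debugging I would export a reduced test that returns the status as a plain int (and declare int instead of void on the C# side); bindTestCuda is only a hypothetical name, bilinTexture is the file-scope texture shown above:

extern "C" __declspec(dllexport) int __stdcall
bindTestCuda( unsigned int width, unsigned int height )
{
    cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
    cudaArray *dev_input = 0;

    cudaError_t err = cudaMallocArray( &dev_input, &channelDesc, width, height );
    if (err != cudaSuccess)
        return (int)err;

    err = cudaBindTextureToArray( bilinTexture, dev_input, channelDesc );
    if (err != cudaSuccess) { cudaFreeArray( dev_input ); return (int)err; }

    cudaUnbindTexture( bilinTexture );
    cudaFreeArray( dev_input );
    return (int)cudaSuccess;        // caller can map the int back to a cudaError_t
}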

My configuration is: CUDA 4.0, Windows x64, GeForce 460.
