16-bit float texture with CUDA driver API causes CRT runtime error... and strange behaviour

Hi. In my application I need 1D 16-bit float textures, so I have to use the driver API. The rest of the code uses the regular CUDA runtime API, not the driver API.

I do it like this:

CUDA_ARRAY_DESCRIPTOR desc;
desc.Format      = CU_AD_FORMAT_HALF;
desc.NumChannels = 4;
desc.Width       = n;
desc.Height      = 0;  // with desc.Height = 1 I get driver error 190 on cuMemcpyHtoA
CU_SAFE_CALL(cuArrayCreate(&hrt.m_cuArrayPos, &desc));

float4* tmpVertex = (float4*)malloc(n * sizeof(float4));
// fill tmpVertex with values
CU_SAFE_CALL(cuMemcpyHtoA(hrt.m_cuArrayPos, 0, tmpVertex, n * sizeof(float4)));
free(tmpVertex);
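
For reference, with CU_AD_FORMAT_HALF and NumChannels = 4 each texel in the array is 4 × 2 = 8 bytes, so a host buffer that matched the array byte-for-byte would hold 16-bit halves rather than float4 values. Below is a rough sketch of what I assume such a copy would look like; float_to_half is a simplified truncating converter written only for illustration.

#include <string.h>  // memcpy

// Simplified float -> IEEE 754 half conversion (normal range only, truncates the mantissa).
static unsigned short float_to_half(float f)
{
    unsigned int bits;
    memcpy(&bits, &f, sizeof(bits));
    unsigned int sign = (bits >> 16) & 0x8000u;
    int exp           = (int)((bits >> 23) & 0xFFu) - 127 + 15;   // rebias the exponent
    unsigned int mant = (bits >> 13) & 0x3FFu;                    // keep top 10 mantissa bits
    if (exp <= 0)  return (unsigned short)sign;                   // flush small values to zero
    if (exp >= 31) return (unsigned short)(sign | 0x7C00u);       // clamp large values to +/-inf
    return (unsigned short)(sign | ((unsigned int)exp << 10) | mant);
}

// ...
// 4 half channels per texel -> n * 4 halves -> n * 8 bytes in total.
unsigned short* tmpHalf = (unsigned short*)malloc(n * 4 * sizeof(unsigned short));
for (size_t i = 0; i < n * 4; ++i)
    tmpHalf[i] = float_to_half(((const float*)tmpVertex)[i]);
CU_SAFE_CALL(cuMemcpyHtoA(hrt.m_cuArrayPos, 0, tmpHalf, n * 4 * sizeof(unsigned short)));
free(tmpHalf);

The crash described below happens with the original float4-sized copy, not with this sketch.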

On the cuMemcpyHtoA call my application shows a black screen for a moment (as if a console window were displayed full screen for a second).

After that it crashes on any CUDA call with error R6030 (CRT not initialized).

Am I doing something wrong, or is it just not possible to mix the CUDA runtime API with the CUDA driver API?

Thanks in advance!

I think that is actually your GPU being reset - thus producing no output at all ;)

Obviously, after the crash no other CUDA call makes any sense to the wiped-out GPU…

I think I remember reading somewhere in the manual that runtime API and driver API cannot be mixed. Could be mistaken though…

Also, are you sure 16-bit float textures are supported?

btw. you are creating a cudaArray, but not a texture!

Yep, I'm sure 16-bit textures are supported in the driver API. And yes, I know that I'm creating a cudaArray; I want to bind it to a texture later on.
But if the CUDA runtime API and the driver API really can't be mixed… that's really bad!
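
For what it's worth, the binding step I have in mind would look roughly like this. It's only a sketch: "kernels.cubin" and the texture reference name "texPos" are placeholders, and I'm assuming the texref is declared in a module loaded through the driver API.

CUmodule module;
CUtexref texPos;
CU_SAFE_CALL(cuModuleLoad(&module, "kernels.cubin"));                       // placeholder module name
CU_SAFE_CALL(cuModuleGetTexRef(&texPos, module, "texPos"));                 // placeholder texref name
CU_SAFE_CALL(cuTexRefSetFormat(texPos, CU_AD_FORMAT_HALF, 4));              // 4-channel half texels
CU_SAFE_CALL(cuTexRefSetAddressMode(texPos, 0, CU_TR_ADDRESS_MODE_CLAMP));  // 1D -> dimension 0 only
CU_SAFE_CALL(cuTexRefSetFilterMode(texPos, CU_TR_FILTER_MODE_POINT));
CU_SAFE_CALL(cuTexRefSetArray(texPos, hrt.m_cuArrayPos, CU_TRSA_OVERRIDE_FORMAT));

If I understand the 2.3 docs correctly, the texref then still has to be passed to the kernel with cuParamSetTexRef(func, CU_PARAM_TR_DEFAULT, texPos) before launching through the driver API.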

edit:
Alternatively, I think I could create a 16-bit texture in D3D, but I don't want to do that if 16-bit float textures can be created with the driver API.
btw: I use CUDA 2.3