No warning/error when mistakenly using cudaMalloc/cudaFree on a class whose constructor and destructor are declared only __host__, not __device__?

Is there no warning or error when you mistakenly use cudaMalloc/cudaFree on a class whose constructor and destructor are declared only __host__ but not __device__?
And what happens inside CUDA when this occurs?
Thanks!

As in: https://developer.nvidia.com/blog/separate-compilation-linking-cuda-device-code/
If I use this in a .cpp file:

    cudaMalloc(&devPArray, n*sizeof(particle));
    cudaMemcpy(devPArray, pArray, n*sizeof(particle), cudaMemcpyHostToDevice);

should the constructor and destructor of class particle be declared with nothing, or with __host__ __device__?

If you need to be able to call the constructor and destructor in both kernel code and CPU code, you need __host__ __device__.
However, cudaMalloc and cudaFree do not call constructors or destructors.
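
To illustrate that point, here is a minimal sketch (the `particle` layout is assumed, since the original class is not shown): cudaMalloc only reserves raw device memory, so no constructor ever runs on the device, and a host-only constructor is sufficient for the cudaMalloc + cudaMemcpy pattern from the question.

```cuda
#include <cuda_runtime.h>

struct particle {
    float x, y, z;
    __host__ particle() : x(0), y(0), z(0) {}  // host-only ctor is fine here
};

int main() {
    const int n = 16;
    particle* pArray = new particle[n];   // constructors run on the host
    particle* devPArray = nullptr;

    // Raw allocation: no particle constructor is invoked on the device.
    cudaMalloc(&devPArray, n * sizeof(particle));
    // Copying the host-constructed objects is what initializes the device copies.
    cudaMemcpy(devPArray, pArray, n * sizeof(particle), cudaMemcpyHostToDevice);

    cudaFree(devPArray);                  // no destructor runs either
    delete[] pArray;
    return 0;
}
```

__host__ __device__ would only be needed if a kernel itself constructed or destroyed particle objects (e.g. via placement new in device code).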

Thanks! Then do you know the common reasons for a cudaErrorMemoryAllocation error from cudaMalloc or cudaGraphicsGLRegisterImage when there is enough GPU memory available?

You may find cudaMemGetInfo useful.
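
For reference, a minimal sketch of querying free and total device memory with cudaMemGetInfo, e.g. immediately before the failing allocation:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Reports free and total memory on the current device.
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("free: %zu MiB / total: %zu MiB\n",
           freeBytes >> 20, totalBytes >> 20);
    return 0;
}
```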

I used it; that’s why I said there is enough GPU memory available, but I still got cudaErrorMemoryAllocation…

Can you post a minimal reproducer? How much memory is reported free with cudaMemGetInfo, and how much do you allocate when it fails?

Total is 6 GB; cudaMemGetInfo reports almost 5 GB free right after error code 2 (cudaErrorMemoryAllocation) is returned by cudaGraphicsGLRegisterImage or cudaMalloc/cudaMalloc3DArray.
The application deletes the window after rendering and then recreates it, using CUDA-OpenGL interop, and this loops for many days. After about half a day (roughly 50 thousand iterations) it hits this CUDA error. I tested a few times, and each run fails at around, though not exactly, 50 thousand iterations.
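
One hypothesis worth checking, given that the failure appears after many register/recreate cycles rather than at a particular memory level: if each window-recreation cycle registers a GL image but never unregisters it, driver-side handles and bookkeeping accumulate even while cudaMemGetInfo still reports memory as free. A hypothetical sketch of the pairing to verify (`tex` and `perWindowCycle` are placeholder names, not from the original application):

```cuda
#include <cuda_gl_interop.h>

void perWindowCycle(GLuint tex) {
    cudaGraphicsResource* res = nullptr;
    cudaError_t err = cudaGraphicsGLRegisterImage(
        &res, tex, GL_TEXTURE_2D, cudaGraphicsRegisterFlagsNone);
    if (err != cudaSuccess) {
        // This is where cudaErrorMemoryAllocation would surface.
        return;
    }

    // ... map the resource, use it in kernels, unmap ...

    // Without this call, every loop iteration leaks the registration.
    cudaGraphicsUnregisterResource(res);
}
```

If the registration is already unregistered each cycle, the same accounting applies to the GL texture itself and to any per-cycle CUDA contexts or streams the window recreation creates.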
Thanks very much!