Error when using pointers in Optix programs

In my code I use bilinear interpolation. The interpolation function takes a point (float2) and two pointers (int2*, float*) for the neighbour pixels and their weights. Before calling the interpolation function, I create the two pointers and allocate memory as follows:

int2* pixels = new int2[4];
float* weights = new float[4];

When I run this code, it sometimes throws the following error:

terminate called after throwing an instance of 'optix::Exception'
  what():  Unknown error (Details: Function "RTresult _rtContextLaunch2D(RTcontext, unsigned int, RTsize, RTsize)" caught exception: Encountered a CUDA error: cudaDriver().CuMemcpyDtoHAsync( dstHost, srcDevice, byteCount, hStream.get() ) returned (700): Illegal address
================================================================================
Backtrace:
	(0) () +0x711547
	(1) () +0x70f8fb
	(2) () +0x30fe51
	(3) () +0x5bb6f3
	(4) () +0x5bc4c4
	(5) () +0x1c81bf
	(6) () +0x1c8946
	(7) () +0x1c9557
	(8) () +0x17a69b
	(9) rtContextLaunch2D() +0x2b9
	(10) main() +0x203
	(11) __libc_start_main() +0xf0
	(12) _start() +0x29

================================================================================
)
Aborted (core dumped)

I was able to avoid the error by using fixed-size arrays instead of heap-allocated pointers, as follows:

int2 pixels[4];
float weights[4];

Just out of curiosity, I want to know why this error is being thrown. Is it allowed to allocate memory the way I did?

Please read the whole OptiX Programming Guide.
Dynamic memory allocation (`new`/`malloc`) is not supported in OptiX device code, as explained here:
[url]http://raytracing-docs.nvidia.com/optix/guide/index.html#caveats#14001[/url]
The pointer returned by `new` inside an OptiX program is not valid, so dereferencing it produces the illegal-address CUDA error (700) you see in the backtrace. Fixed-size local (stack) arrays, as in your second version, are the right approach.