I've modified the simplePrimepp example to find the intersections of user-defined rays with a user-supplied model loaded from a PLY file.
The code works well with relatively small models (~15 MB, 296,925 verts and 595,326 triangles), but an error occurs when I try bigger data sets (~150 MB, 4,340,082 verts and 8,556,131 triangles).
I created the context on the GPU and tried placing the vertex/triangle buffers on either the GPU or the host:
Context context = Context::create(RTP_CONTEXT_TYPE_CUDA);
Model model = context->createModel();
model->setTriangles(data.ntris, RTP_BUFFER_TYPE_HOST, data.tris,
                    data.nverts, RTP_BUFFER_TYPE_HOST, data.verts);
//model->setTriangles(data.ntris, RTP_BUFFER_TYPE_CUDA_LINEAR, d_tris,
//                    data.nverts, RTP_BUFFER_TYPE_CUDA_LINEAR, d_verts);
model->update(0);
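For reference, the explicit asynchronous variant I'm asking about below would look roughly like this (a sketch only, assuming the Prime C++ wrapper in optix_primepp.h; I have not verified whether the error surfaces differently this way):

```
// Sketch, assuming the OptiX Prime C++ wrapper (optix_primepp.h).
// With RTP_MODEL_HINT_ASYNC, update() returns immediately and
// finish() blocks until the acceleration structure build completes.
Model model = context->createModel();
model->setTriangles(data.ntris, RTP_BUFFER_TYPE_HOST, data.tris,
                    data.nverts, RTP_BUFFER_TYPE_HOST, data.verts);
model->update(RTP_MODEL_HINT_ASYNC);
model->finish();   // blocks until the build is done
```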
An error occurs with error code 999 and the message: Function "_rtpModelUpdate" caught C++ standard exception: bad allocation: out of memory.
The questions I want to ask are:
- _rtpModelUpdate doesn't implicitly call _rtpModelFinish when the hint is 0, right? I ask because, according to the API doc, there is a memory constraint on _rtpModelFinish (roughly 3x the memory of the final acceleration structure, up to 2 GB). But even if that were the case here, I would think I'm well below that limit.
- Am I restricted by some memory constraint? Could this be related to page-locked (pinned) memory?
My setup:
- Windows 7 Home Premium & Professional, 64-bit
- OptiX 3.5.1
- CUDA toolkit 5.5
- (doesn't work on) GeForce GTX 480 (dedicated/total GPU mem = 1536/4095 MB) and GT 640 (1024/4096 MB)
- (works on) GeForce GT 555M (3072/6885 MB) <-- why does this one work? More dedicated memory?
Any help would be greatly appreciated!