I am running a simple OptiX Prime program that executes a closest-hit query. I wrote my own function to set the origin and direction of the rays in a buffer. The problem is that when I use RTP_BUFFER_TYPE_HOST as my buffer type, none of the rays that should intersect the triangles actually do. However, if I run with the CUDA_LINEAR buffer type, I get the intersections I expect. I inspected the ray buffer before executing the query, and the origins and directions of all the rays look correct, so I am not sure what the problem is. Help with either of the following would be great:
- Keep using the HOST buffer type, but fix the problem stated above so that the rays hit when they should.
- Use the CUDA_LINEAR buffer type and change the CUDA code that generates each ray's origin and direction to what I want it to be. I am not sure how to do this, so if it is the better option, could someone give me some tips?
Also, is there an advantage to using one buffer type over the other?
What you're asking for is demonstrated in the OptiX 4.1.1 sample primeSimple and its C++-wrapped version primeSimplePP.
If you look at the implementation of the Buffer class used there (inside putil/Buffer.h), it can allocate either host or device memory.
The default bufferType in those apps is host memory. So if those applications run on your system and produce an output.ppm image showing a cow with colors visualizing the normal vectors, then something in your own application is not working properly.
You can switch the ray buffers to CUDA buffers with a command line parameter in both versions of the application; this allocates the ray buffer in device memory and fills it using the CUDA kernels in the respective primeKernels.cu files.
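As a rough illustration of that pattern (this is a hedged sketch, not the SDK's actual kernel: the Ray struct matches the RTP_BUFFER_FORMAT_RAY_ORIGIN_TMIN_DIRECTION_TMAX layout, but the kernel name, launch wrapper, and the orthographic camera setup are assumptions):

```cuda
// Sketch: generate orthographic rays directly in device memory.
// Ray layout matches RTP_BUFFER_FORMAT_RAY_ORIGIN_TMIN_DIRECTION_TMAX
// (origin + tmin, direction + tmax). Names are hypothetical.
struct Ray
{
  float3 origin;
  float  tmin;
  float3 dir;
  float  tmax;
};

__global__ void generateRaysKernel(Ray* rays, int width, int height,
                                   float3 bbmin, float3 bbmax)
{
  const int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx >= width * height)
    return;

  const int x = idx % width;
  const int y = idx / width;

  // Place the ray origins on a regular grid just in front of the scene's
  // bounding box, all shooting along -z (adjust to whatever you need).
  Ray r;
  r.origin = make_float3(bbmin.x + (x + 0.5f) / width  * (bbmax.x - bbmin.x),
                         bbmin.y + (y + 0.5f) / height * (bbmax.y - bbmin.y),
                         bbmax.z + 1.0f);
  r.tmin = 0.0f;
  r.dir  = make_float3(0.0f, 0.0f, -1.0f);
  r.tmax = 1.0e34f;

  rays[idx] = r; // rays points at the memory behind the CUDA_LINEAR buffer
}

// Host-side launch; raysDevice is the device pointer you described to
// OptiX Prime with RTP_BUFFER_TYPE_CUDA_LINEAR.
void generateRays(Ray* raysDevice, int width, int height,
                  float3 bbmin, float3 bbmax)
{
  const int count     = width * height;
  const int blockSize = 256;
  const int gridSize  = (count + blockSize - 1) / blockSize;
  generateRaysKernel<<<gridSize, blockSize>>>(raysDevice, width, height,
                                              bbmin, bbmax);
}
```

Changing the ray origins and directions then comes down to editing the per-thread computation inside the kernel.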
The calling function createRaysOrtho() shows the ray generation for either buffer type.
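For the host path, the equivalent is just a CPU loop writing the same ray layout into host memory before you pass the pointer to the query. Again a sketch with made-up names, not the SDK's code:

```cuda
// Sketch: fill a RTP_BUFFER_TYPE_HOST ray buffer on the CPU with the same
// layout (origin + tmin, direction + tmax). Names are illustrative.
#include <vector>

struct RayHost
{
  float origin[3];
  float tmin;
  float dir[3];
  float tmax;
};

std::vector<RayHost> createRaysHost(int width, int height,
                                    const float bbmin[3], const float bbmax[3])
{
  std::vector<RayHost> rays(static_cast<size_t>(width) * height);
  for (int y = 0; y < height; ++y)
  {
    for (int x = 0; x < width; ++x)
    {
      RayHost& r = rays[static_cast<size_t>(y) * width + x];
      r.origin[0] = bbmin[0] + (x + 0.5f) / width  * (bbmax[0] - bbmin[0]);
      r.origin[1] = bbmin[1] + (y + 0.5f) / height * (bbmax[1] - bbmin[1]);
      r.origin[2] = bbmax[2] + 1.0f; // start just in front of the bbox
      r.tmin = 0.0f;
      r.dir[0] = 0.0f;
      r.dir[1] = 0.0f;
      r.dir[2] = -1.0f;              // shoot along -z
      r.tmax = 1.0e34f;
    }
  }
  return rays; // hand rays.data() to the query as a host ray buffer
}
```

If the same ray values work in the CUDA_LINEAR path but not here, double-check that the buffer format, stride, and ray count you declare to OptiX Prime match this layout exactly.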
Please check if these programs work for you with the rays in host and device buffers.
If yes, I would recommend starting from there and seeing what's different in your application.
The benefit of using CUDA linear buffers for the rays is that you can generate the rays in parallel on the GPU, directly in device memory. That makes generating new rays much faster, and it avoids the PCI-E bandwidth cost of transferring between host and device memory that you incur when generating rays with the CPU in host memory.
Also read this https://devtalk.nvidia.com/default/topic/1023548/optix/crash-when-using-rtp_buffer_type_cuda_linear-buffer-type-in-optix-prime/
which describes an issue with CUDA linear buffers for model data, though.
Thank you! I eventually restarted my computer and changed the driver my GPU was running on, and that ended up fixing my problem!