Is there a way to know how much GPU memory OptiX will use?

Hello,

I created a program to pair meshes.
Previous question: Question about ray origin, scene epsilon, and closest hit

Is there a way to know roughly how much GPU memory OptiX will use?

An issue occurred where the program crashed while running on a system with the specifications below.

  • OS: Windows 10 64bit
  • GPU: NVIDIA GeForce GTX 960, 2GB
  • NVIDIA Driver: 536.67
  • OptiX: 7.7
  • CUDA: 12.2.1

Number of meshes used: approximately 110,000

An error occurred as shown below.

I think GPU memory is the problem, because it worked with 30 meshes.

Therefore, if the memory required for a large number of meshes would exceed the available GPU memory, I would like to detect that in advance and block the build.

I am using compaction when building the AS (following the optixMeshViewer sample for that part).

Or should I be using CUDA_SYNC_CHECK(), and could that part be the problem?

Thank you.

First, you can see exactly how much memory OptiX will use for acceleration structures, because you allocate the device memory for them yourself based on the optixAccelComputeMemoryUsage() results.
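
For example, a minimal sketch of that query could look like this (assuming an existing OptixDeviceContext `context`, a filled-in OptixBuildInput `buildInput` for your triangle meshes, and the OPTIX_CHECK error-check macro from the SDK samples):

```cpp
OptixAccelBuildOptions accelOptions = {};
accelOptions.buildFlags = OPTIX_BUILD_FLAG_ALLOW_COMPACTION;
accelOptions.operation  = OPTIX_BUILD_OPERATION_BUILD;

// Ask OptiX how much device memory the AS build will need
// before allocating anything.
OptixAccelBufferSizes bufferSizes = {};
OPTIX_CHECK( optixAccelComputeMemoryUsage( context, &accelOptions,
                                           &buildInput, 1, &bufferSizes ) );

// Peak requirement during the build: output buffer plus temporary buffer.
const size_t requiredBytes = bufferSizes.outputSizeInBytes + bufferSizes.tempSizeInBytes;
```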

The same applies to all other CUDA malloc calls you do inside your application.

The only thing in OptiX where the memory requirements aren’t known to the developer a priori is the OptiX internal stack size, which depends on various parameters of the modules, programs, pipeline, recursion depth, traversal depth, and the underlying GPU. The more cores, the more memory is required.
(Note that you must calculate the OptiX stack size explicitly when using callable programs, and doing so is recommended anyway.)
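
A sketch of that explicit stack size calculation using the helpers from optix_stack_size.h (assuming `pipeline` already exists, `stackSizes` has been accumulated over all program groups, a maximum trace recursion depth of 2, no callable programs, and a two-level IAS-over-GAS scene):

```cpp
#include <optix_stack_size.h>

unsigned int directCallableStackSizeFromTraversal = 0;
unsigned int directCallableStackSizeFromState     = 0;
unsigned int continuationStackSize                = 0;

// Derive the required stack sizes from the accumulated program group sizes.
OPTIX_CHECK( optixUtilComputeStackSizes( &stackSizes,
                                         2,   // maxTraceDepth
                                         0,   // maxCCDepth (continuation callables)
                                         0,   // maxDCDepth (direct callables)
                                         &directCallableStackSizeFromTraversal,
                                         &directCallableStackSizeFromState,
                                         &continuationStackSize ) );

// Apply them to the pipeline; the last argument is the maximum traversable
// graph depth (2 for a single IAS over GASes).
OPTIX_CHECK( optixPipelineSetStackSize( pipeline,
                                        directCallableStackSizeFromTraversal,
                                        directCallableStackSizeFromState,
                                        continuationStackSize,
                                        2 ) );
```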

On top of that, there is some memory allocated by CUDA on the device for internal resource management when creating a CUDA context.

If you just want some overall memory usage statistics, you can use the CUDA call cudaMemGetInfo() (or cuMemGetInfo() for the CUDA Driver API).
Another simple method is to run an nvidia-smi command in a command prompt which prints out memory statistics like this:

"C:\Windows\System32\nvidia-smi.exe"
"C:\Windows\System32\nvidia-smi.exe" --format=csv,noheader --query-gpu=timestamp,name,pstate,memory.free,memory.used,utilization.gpu --loop-ms=500

That will show you if the GPU ran out of memory.
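
To do the same check programmatically before each build, a rough sketch (assuming the `requiredBytes` value from the optixAccelComputeMemoryUsage example above and the CUDA_CHECK macro from the SDK samples) could be:

```cpp
#include <cuda_runtime.h>
#include <stdexcept>

// Query free and total device memory from the CUDA runtime.
size_t freeBytes = 0, totalBytes = 0;
CUDA_CHECK( cudaMemGetInfo( &freeBytes, &totalBytes ) );

// Refuse to start the AS build if it obviously won't fit.
if( requiredBytes > freeBytes )
{
    throw std::runtime_error( "Not enough free GPU memory for this acceleration structure build" );
}
```

Note that this is only an approximation; an allocation can still fail even when the sum appears to fit, for example due to fragmentation or other allocations made by the driver.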

Another thing to try is to enable the validation mode in the OptixDeviceContextOptions, which will dump more information to the console and show whether there is anything else wrong with your OptiX setup.
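
A minimal sketch of enabling validation mode plus a log callback at context creation (the helper name is hypothetical; validation mode adds runtime overhead, so only enable it for debugging):

```cpp
#include <optix.h>
#include <optix_stubs.h>
#include <iostream>

// Print every OptiX log message to the console.
static void contextLogCallback( unsigned int level, const char* tag,
                                const char* message, void* /*cbdata*/ )
{
    std::cerr << "[" << level << "][" << tag << "]: " << message << "\n";
}

// Hypothetical helper; assumes optixInit() has been called and cuCtx is a valid CUDA context.
OptixDeviceContext createDebugContext( CUcontext cuCtx )
{
    OptixDeviceContextOptions options = {};
    options.logCallbackFunction = &contextLogCallback;
    options.logCallbackLevel    = 4;  // 4 = most verbose, includes status/progress messages
    options.validationMode      = OPTIX_DEVICE_CONTEXT_VALIDATION_MODE_ALL;

    OptixDeviceContext context = nullptr;
    OPTIX_CHECK( optixDeviceContextCreate( cuCtx, &options, &context ) );
    return context;
}
```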
