GPU 0 does not support virtual memory - (core dumped)

Hi There,

I get the error “GPU 0 does not support virtual memory” on Orin AGX,
yet the Orin AGX, with 2048 CUDA cores, supports virtual memory, right?

It appears that the error stems from the tiny-cuda-nn library (output below).
I tested and got the same error on the NVlabs/tiny-cuda-nn repo.

tiny-cuda-nn warning: GPUMemoryArena: GPU 0 does not support virtual memory. Falling back to regular allocations, which will be larger and can cause occasional stutter.
root@a520461b9c4d:/main/tiny-cuda-nn# ./build/mlp_learning_an_image data/images/albert.jpg data/config_hash.json
Loading custom json config 'data/config_hash.json'.
Beginning optimization with 10000000 training steps.
tiny-cuda-nn warning: GPUMemoryArena: GPU 0 does not support virtual memory. Falling back to regular allocations, which will be larger and can cause occasional stutter.
terminate called after throwing an instance of 'std::runtime_error'
what(): /main/tiny-cuda-nn/include/tiny-cuda-nn/cuda_graph.h:99 cudaStreamEndCapture(stream, &m_graph) failed: operation failed due to a previous error during capture
Aborted (core dumped)

BTW, I’m using CUDA 12.2:
Built on Tue_Jun_13_19:22:54_PDT_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0

Any idea how to solve this?

Hi,

It’s a warning message and might not be the cause of the failure.

The GitHub repository is verified on desktop GPUs.
Based on the issue opened below, it seems Jetson is not officially supported.
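If you want to confirm what the driver reports on your Orin, you can query the virtual-memory-management attribute directly with the CUDA driver API. This is just a minimal sketch of the check (tiny-cuda-nn presumably performs a similar query before falling back to regular allocations):

// check_vmm.cpp - query whether device 0 supports the CUDA
// virtual memory management API (cuMemCreate / cuMemAddressReserve),
// which is presumably what the "does not support virtual memory" warning refers to.
// Build (assuming the CUDA toolkit is installed): nvcc check_vmm.cpp -o check_vmm -lcuda
#include <cuda.h>
#include <cstdio>

int main() {
    if (cuInit(0) != CUDA_SUCCESS) {
        printf("cuInit failed\n");
        return 1;
    }
    CUdevice dev;
    if (cuDeviceGet(&dev, 0) != CUDA_SUCCESS) {
        printf("cuDeviceGet failed\n");
        return 1;
    }
    int vmm_supported = 0;
    cuDeviceGetAttribute(&vmm_supported,
                         CU_DEVICE_ATTRIBUTE_VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED,
                         dev);
    printf("Virtual memory management supported: %d\n", vmm_supported);
    return 0;
}

If it prints 0, the warning is expected on that device; as noted above, the warning itself is benign, and the crash comes from the later cudaStreamEndCapture error.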

Thanks.
