cudaGraphicsGLRegisterBuffer errors out on all CUDA GL samples

I set up the official NVIDIA container nvcr.io/nvidia/tensorflow:23.03-tf2-py3, which has CUDA 12.1, and TensorFlow sees and can use the GPU. I installed all the X11/GL libraries and header files into this container and exposed it to the host X11 display. The GPU is an RTX 4090. All CUDA GL samples fail with the same error:

code=304(cudaErrorOperatingSystem) "cudaGraphicsGLRegisterBuffer(&m_pGRes[i], m_pbo[i], cudaGraphicsMapFlagsNone)"

The nbody sample runs fine with the -hostmem flag. There are no other GPUs in the system, and I would think CUDA should be set up correctly in this container. So what could be the problem? Maybe you can't run these samples from a container?
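For context, this is roughly what the failing interop path in the samples does (a minimal sketch; the checkCuda helper and mapPboToCuda wrapper are my own illustration, only the cudaGraphicsGLRegisterBuffer call and its arguments come from the sample):

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>
    #include <cstdio>
    #include <cstdlib>

    static void checkCuda(cudaError_t err, const char *what) {
        if (err != cudaSuccess) {
            fprintf(stderr, "%s failed: %s\n", what, cudaGetErrorString(err));
            exit(EXIT_FAILURE);
        }
    }

    // Register an existing GL pixel buffer object with CUDA and map it
    // to obtain a device pointer (assumes a current OpenGL context).
    void *mapPboToCuda(unsigned int pbo, cudaGraphicsResource **res, size_t *bytes) {
        // This is the call that returns cudaErrorOperatingSystem when the
        // container exposes no graphics capability to the driver.
        checkCuda(cudaGraphicsGLRegisterBuffer(res, pbo, cudaGraphicsMapFlagsNone),
                  "cudaGraphicsGLRegisterBuffer");
        checkCuda(cudaGraphicsMapResources(1, res, 0), "cudaGraphicsMapResources");
        void *devPtr = nullptr;
        checkCuda(cudaGraphicsResourceGetMappedPointer(&devPtr, bytes, *res),
                  "cudaGraphicsResourceGetMappedPointer");
        return devPtr;
    }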

Apparently what was missing was adding -e NVIDIA_DRIVER_CAPABILITIES=all to the docker run command. By default only the compute capability is enabled inside the container; graphics is disabled.
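For example, an invocation along these lines works (the X11 display and socket options are just what I use to forward the host display and may differ on your setup; the essential part is the capabilities variable):

    docker run --gpus all \
        -e NVIDIA_DRIVER_CAPABILITIES=all \
        -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -it nvcr.io/nvidia/tensorflow:23.03-tf2-py3

Setting NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics should also be enough; the key point is that graphics must be included for the CUDA/GL interop calls to succeed.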