Jetson Nano CUDA samples: cudaErrorUnknown

Hi,
I am encountering issues when trying to run some cuda programs on my Jetson Nano b01. These include some cuda samples as well, for example 2_graphics/volumeRender or 2_Graphics/simpleGL. Here is the output of volumeRender:

$ ./volumeRender
CUDA 3D Volume Render Starting...

GPU Device 0: "Maxwell" with compute capability 5.3

Read './data/Bucky.raw', 32768 bytes
Press '+' and '-' to change density (0.01 increments)
      ']' and '[' to change brightness
      ';' and ''' to modify transfer function offset
      '.' and ',' to modify transfer function scale

CUDA error at volumeRender.cpp:436 code=999(cudaErrorUnknown) "cudaGraphicsGLRegisterBuffer(&cuda_pbo_resource, pbo, cudaGraphicsMapFlagsWriteDiscard)"

I also tried some samples from /usr/src/nvidia/graphics_demos/: gears-basic and bubbles. Both fail with: EGL failed to obtain display.

I followed the instructions on Getting Started With Jetson Nano Developer Kit | NVIDIA Developer to download and flash the SD card image. I later ran `apt update` and `apt upgrade`, and also installed the following packages (plus dependencies) with apt: nvidia-jetpack, mesa-utils, libglew-dev, tigervnc-standalone-server, and tigervnc-xorg-extension. I set up a vncserver session on display :1 and connected to it from my PC with xtigervnc-viewer. The GUI works fine and I can run glxgears with no issues.

$ uname -a
Linux jetson 4.9.253-tegra #1 SMP PREEMPT Sat Feb 19 08:59:22 PST 2022 aarch64 aarch64 aarch64 GNU/Linux
/usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3964 MBytes (4156661760 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

glxinfo does not mention ‘nvidia’ at all. Is this normal?

$ glxinfo | egrep -i '(nvidia|version)'
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
    Version: 20.0.8
    Max core profile version: 3.3
    Max compat profile version: 3.1
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.1
OpenGL core profile version string: 3.3 (Core Profile) Mesa 20.0.8
OpenGL core profile shading language version string: 3.30
OpenGL version string: 3.1 Mesa 20.0.8
OpenGL shading language version string: 1.40
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 20.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
    GL_EXT_shader_implicit_conversions, GL_EXT_shader_integer_mix,
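One way to read this: if the renderer/version strings mention Mesa rather than NVIDIA, GLX requests are being served by the Mesa software stack instead of the Tegra driver, which would explain why CUDA-GL interop calls like cudaGraphicsGLRegisterBuffer fail. A minimal sketch of that check (the version string is copied from the glxinfo output above; on the device you would pipe real `glxinfo` output instead):

```shell
# Classify the GL stack from a glxinfo version line.
gl_version="OpenGL version string: 3.1 Mesa 20.0.8"
case "$gl_version" in
  *NVIDIA*) echo "NVIDIA GL stack" ;;
  *Mesa*)   echo "Mesa (software) GL stack" ;;
  *)        echo "unknown GL stack" ;;
esac
```

On a correctly configured session driven by the NVIDIA X driver, the same lines would report the Tegra/NVIDIA implementation instead of Mesa.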

I read about setting __GLX_VENDOR_LIBRARY_NAME=nvidia but this gives a different error:

/usr/local/cuda/samples/2_Graphics/volumeRender$ __GLX_VENDOR_LIBRARY_NAME=nvidia ./volumeRender
CUDA 3D Volume Render Starting...

X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  152 (GLX)
  Minor opcode of failed request:  24 (X_GLXCreateNewContext)
  Value in failed request:  0x0
  Serial number of failed request:  31
  Current serial number in output stream:  32

Did I somehow manage to break the pre-installed CUDA runtime / NVIDIA driver? Any help would be greatly appreciated!

I tried removing and purging all of the nvidia packages that came pre-installed with the SD-card image and followed the JetPack OTA update instructions: https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/updating_jetson_and_host.html#wwpID0E0KL0HA

Unfortunately the issue persists.
I now have the following NVIDIA-related packages installed:

$ apt list --installed | grep nvidia

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libnvidia-container-tools/stable,now 1.7.0-1 arm64 [installed]
libnvidia-container0/stable,now 0.10.0+jetpack arm64 [installed]
libnvidia-container1/stable,now 1.7.0-1 arm64 [installed]
nvidia-container/stable,now 4.6.1-b110 arm64 [installed,automatic]
nvidia-container-csv-cuda/stable,now 10.2.460-1 arm64 [installed,automatic]
nvidia-container-csv-cudnn/stable,now 8.2.1.32-1+cuda10.2 arm64 [installed,automatic]
nvidia-container-csv-tensorrt/stable,now 8.2.1.8-1+cuda10.2 arm64 [installed,automatic]
nvidia-container-csv-visionworks/stable,now 1.6.0.501 arm64 [installed,automatic]
nvidia-container-runtime/stable,now 3.7.0-1 all [installed,automatic]
nvidia-container-toolkit/stable,now 1.7.0-1 arm64 [installed,automatic]
nvidia-cuda/stable,now 4.6.1-b110 arm64 [installed,automatic]
nvidia-cudnn8/stable,now 4.6.1-b110 arm64 [installed,automatic]
nvidia-docker2/stable,now 2.8.0-1 all [installed,automatic]
nvidia-jetpack/stable,now 4.6.1-b110 arm64 [installed]
nvidia-l4t-3d-core/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-apt-source/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-bootloader/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-camera/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-configs/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-core/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-cuda/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-firmware/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-gputools/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-graphics-demos/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-gstreamer/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-init/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-initrd/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-jetson-io/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-jetson-multimedia-api/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-kernel/stable,now 4.9.253-tegra-32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-kernel-dtbs/stable,now 4.9.253-tegra-32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-kernel-headers/stable,now 4.9.253-tegra-32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-libvulkan/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-multimedia/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-multimedia-utils/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-oem-config/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-tools/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-wayland/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-weston/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-x11/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-l4t-xusb-firmware/stable,now 32.7.1-20220219090432 arm64 [installed]
nvidia-opencv/stable,now 4.6.1-b110 arm64 [installed,automatic]
nvidia-tensorrt/stable,now 4.6.1-b110 arm64 [installed,automatic]
nvidia-visionworks/stable,now 4.6.1-b110 arm64 [installed,automatic]
nvidia-vpi/stable,now 4.6.1-b110 arm64 [installed,automatic]

Any ideas? Thanks

Hi,

Have you tried exporting the DISPLAY variable?
For example, the following command works on my side:

$ export DISPLAY=:1
$ ./x11/gears
 running for 5.000000 seconds...
Total FPS: 60.200001

Thanks.

Hi,

Yes, I made sure that DISPLAY matches the vnc server session:

$ echo $DISPLAY
:1.0
$ vncserver --list

TigerVNC server sessions:

X DISPLAY #     PROCESS ID
:1              7587
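As a quick sanity check, a DISPLAY string maps to a Unix socket under /tmp/.X11-unix; the sketch below (plain POSIX sh, using the ':1.0' value from above) derives that path, which should exist while the VNC session is running:

```shell
# Derive the X server socket path from a DISPLAY string.
# An X server on display N listens on /tmp/.X11-unix/XN.
disp=":1.0"          # the DISPLAY value reported above
num="${disp#:}"      # strip the leading ':'  -> "1.0"
num="${num%%.*}"     # strip the screen part  -> "1"
echo "/tmp/.X11-unix/X$num"
```

On the device, `ls /tmp/.X11-unix/` should list one `X<n>` entry per running X server (here, the TigerVNC session on :1).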

When starting volumeRender, for example, I can see the window open very briefly before it closes again.

Just in case it was not clear in my first post: I am running the Jetson headless. Might this be the issue?

The issue was indeed me running it headless.
This helped me: JetPack 4.3: MESA-LOADER: failed to open swrast while in xrdp session - #24 by sorlando961

In summary: I had to start a new X server on :0 and use x11vnc to attach a VNC server to :0, instead of using the TigerVNC vncserver to create a new session on :1. I was then able to render to :0 and run my applications on the GPU:

$ sudo service lightdm stop

then on separate shells:

$ X
$ x11vnc -auth /var/run/lightdm/root/\:0 -geometry 1920x1080
