Running depth_anything_v2 from holohub on DGX Spark

Hi,
I am trying to run the depth_anything_v2 application from holoscan on the DGX Spark, but I seem to run into issues with holoviz.

Any ideas what I could do? Thanks a lot

[info] [context.cpp:54] _______________
[info] [context.cpp:54] Vulkan Version:
[info] [context.cpp:54] - available: 1.3.275
[info] [context.cpp:54] - requesting: 1.2.0
[warning] [context.cpp:57] VK_ERROR_EXTENSION_NOT_PRESENT: VK_KHR_display - 0
[error] [gxf_wrapper.cpp:69] Exception occurred when starting operator: 'holoviz' - Failed to create the Vulkan instance.
[warning] [entity_executor.cpp:674] Failed to start entity [holoviz]
[warning] [greedy_scheduler.cpp:243] Error while executing entity 47 named 'holoviz': GXF_FAILURE
[error] [entity_executor.cpp:789] Entity [holoviz] must be in Started, Tick Pending, Ticking or Idle stage before stopping. Current state is StartPending
[info] [greedy_scheduler.cpp:401] Scheduler finished.
[error] [program.cpp:580] wait failed. Deactivating...
[error] [runtime.cpp:1655] Graph wait failed with error: GXF_FAILURE
[warning] [gxf_executor.cpp:2548] GXF call GxfGraphWait(context) in line 2548 of file /workspace/holoscan-sdk/src/core/executors/gxf/gxf_executor.cpp failed with 'GXF_FAILURE' (1)
[info] [gxf_executor.cpp:2563] [Depth Anything V2 App] Graph execution finished.
[error] [gxf_executor.cpp:2571] [Depth Anything V2 App] Graph execution error: GXF_FAILURE
Traceback (most recent call last):
  File "/workspace/holohub/applications/depth_anything_v2/depth_anything_v2.py", line 312, in <module>
    main()
  File "/workspace/holohub/applications/depth_anything_v2/depth_anything_v2.py", line 308, in main
    app.run()
RuntimeError: Failed to create the Vulkan instance.
[info] [gxf_executor.cpp:432] [Depth Anything V2 App] Destroying context
Non-zero exit code running command: python3 /workspace/holohub/applications/depth_anything_v2/depth_anything_v2.py
Exit code: 1
Non-zero exit code running command: docker run --net host --interactive --tty -u 0:0 -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /home/hexagonme/repos/holohub:/workspace/holohub -w /workspace/holohub --gpus all --cap-add CAP_SYS_PTRACE --ipc=host -v /dev:/dev --device-cgroup-rule "c 81:* rmw" --device-cgroup-rule "c 189:* rmw" -e NVIDIA_DRIVER_CAPABILITIES=graphics,video,compute,utility,display -e HOME=/workspace/holohub -e CUPY_CACHE_DIR=/workspace/holohub/.cupy/kernel_cache -e HOLOHUB_BUILD_LOCAL=1 --rm --ipc=host --cap-add=CAP_SYS_PTRACE --ulimit=memlock=-1 --ulimit=stack=67108864 --device /dev/video1 --device /dev/video0 --device /dev/snd/controlC1 --device /dev/snd/controlC0 --device /dev/snd/pcmC1D0c --device /dev/snd/pcmC0D9p --device /dev/snd/pcmC0D8p --device /dev/snd/pcmC0D7p --device /dev/snd/pcmC0D3p --device /dev/snd/timer --device /dev/snd/seq --device /dev/infiniband/rdma_cm --device /dev/infiniband/uverbs3 --device /dev/infiniband/uverbs2 --device /dev/infiniband/uverbs1 --device /dev/infiniband/uverbs0 -v /usr/lib/aarch64-linux-gnu/nvidia:/usr/lib/aarch64-linux-gnu/nvidia --device /dev/nvidia0 --device /dev/nvidia-modeset -v /usr/share/nvidia/nvoptix.bin:/usr/share/nvidia/nvoptix.bin:ro --group-add 44 --group-add 993 --group-add 988 --group-add 29 -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e PYTHONPATH=/opt/nvidia/holoscan/python/lib:/workspace/holohub/benchmarks/holoscan_flow_benchmarking --entrypoint=/bin/

Hi,

It seems the VK_KHR_display extension is not found. Maybe there is some problem with the NVIDIA driver installation.
Is any Holohub application using Holoviz working? Could you post the output of nvidia-smi and vulkaninfo?

Andreas

Hi,

Thanks a lot for the quick reply.

Here is the output of nvidia-smi:

Thu Dec 11 13:43:12 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05 Driver Version: 580.95.05 CUDA Version: 13.0 |
+-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GB10 On | 0000000F:01:00.0 Off | N/A |
| N/A 38C P8 3W / N/A | Not Supported | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2385 G /usr/lib/xorg/Xorg 18MiB |
| 0 N/A N/A 2680 G /usr/bin/gnome-shell 6MiB |
+-----------------------------------------------------------------------------------------+

And this is the top of vulkaninfo:

'DISPLAY' environment variable not set... skipping surface info
TU: error: ../src/freedreno/vulkan/tu_knl.cc:385: failed to open device /dev/dri/renderD128 (VK_ERROR_INCOMPATIBLE_DRIVER)

VULKANINFO

Vulkan Instance Version: 1.4.328

Could that be the issue?

I get the exact same holoviz error as reported above when running the endoscopy_depth_estimation application in holohub.

Thanks

It seems the X11 DISPLAY env variable was not set when you ran the vulkaninfo command, hence that error. Could you set the DISPLAY env variable and run vulkaninfo --summary? This should give you something like the output below (this one is from a different machine with a different driver, not a DGX Spark), and it should list the VK_KHR_display extension.

VULKANINFO

Vulkan Instance Version: 1.3.204

Instance Extensions: count = 19

VK_EXT_acquire_drm_display : extension revision 1
VK_EXT_acquire_xlib_display : extension revision 1
VK_EXT_debug_report : extension revision 10
VK_EXT_debug_utils : extension revision 2
VK_EXT_direct_mode_display : extension revision 1
VK_EXT_display_surface_counter : extension revision 1
VK_KHR_device_group_creation : extension revision 1
VK_KHR_display : extension revision 23
VK_KHR_external_fence_capabilities : extension revision 1
VK_KHR_external_memory_capabilities : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_display_properties2 : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2 : extension revision 1
VK_KHR_surface : extension revision 25
VK_KHR_surface_protected_capabilities : extension revision 1
VK_KHR_wayland_surface : extension revision 6
VK_KHR_xcb_surface : extension revision 6
VK_KHR_xlib_surface : extension revision 6

Instance Layers: count = 1

VK_LAYER_NV_optimus NVIDIA Optimus layer 1.3.242 version 1

Devices:

GPU0:
apiVersion = 4206834 (1.3.242)
driverVersion = 2244247680 (0x85c48080)
vendorID = 0x10de
deviceID = 0x2231
deviceType = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
deviceName = NVIDIA RTX A5000
driverID = DRIVER_ID_NVIDIA_PROPRIETARY
driverName = NVIDIA
driverInfo = 535.274.02
conformanceVersion = 1.3.5.0
deviceUUID = b92362b2-d370-b8c4-1906-9cd06ed5e776
driverUUID = c7a06fe8-360b-526e-80cc-067a2a45ae5f
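A quick way to check such a listing for the extension Holoviz needs is a small grep wrapper (a sketch; has_khr_display is a hypothetical helper, not part of vulkan-tools):

```shell
# has_khr_display: hypothetical helper that reads a `vulkaninfo --summary`
# dump on stdin and reports whether the VK_KHR_display instance extension
# (which Holoviz requests for display output) is listed. `grep -w` keeps
# it from matching longer names such as VK_KHR_display_swapchain.
has_khr_display() {
    if grep -qw 'VK_KHR_display'; then
        echo "yes"
    else
        echo "no"
    fi
}

# On a live system (requires the vulkan-tools package):
#   vulkaninfo --summary | has_khr_display
```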

Could you also try to run vkcube and see if that simple Vulkan application is running correctly?

There might be several reasons why depth_anything_v2 did not run. One is that the display driver is not working correctly for whatever reason. Another might be the docker setup, since depth_anything_v2 runs inside a docker container. It would be good to know if vkcube and vulkaninfo work both on bare metal and inside a docker container, to isolate the root cause.
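The two checks could be wrapped roughly like this (a sketch; the image name is a placeholder for your local holohub image, and vkcube's --c flag just renders a fixed number of frames and exits):

```shell
# Sketch: the same Vulkan sanity checks on bare metal and inside a
# container, as functions so each can be run separately to isolate
# the root cause.

check_bare_metal() {
    # Succeeds only if VK_KHR_display is reported and vkcube can render.
    vulkaninfo --summary | grep -qw 'VK_KHR_display' &&
        vkcube --c 100    # render 100 frames, then exit
}

check_in_container() {
    # Placeholder image name - substitute your local holohub image tag.
    docker run --rm --runtime=nvidia --gpus all \
        -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
        --entrypoint bash my-holohub-image \
        -c "apt update && apt install -y vulkan-tools && vulkaninfo --summary"
}
```

If check_bare_metal passes but the in-container check fails, the driver is fine and the container setup (device access, NVIDIA runtime, X11 forwarding) is the place to look.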

Andreas

Setting the DISPLAY variable gives me the following vulkaninfo --summary output:

==========
VULKANINFO

Vulkan Instance Version: 1.4.328

Instance Extensions: count = 25

VK_EXT_acquire_drm_display : extension revision 1
VK_EXT_acquire_xlib_display : extension revision 1
VK_EXT_debug_report : extension revision 10
VK_EXT_debug_utils : extension revision 2
VK_EXT_direct_mode_display : extension revision 1
VK_EXT_display_surface_counter : extension revision 1
VK_EXT_headless_surface : extension revision 1
VK_EXT_surface_maintenance1 : extension revision 1
VK_EXT_swapchain_colorspace : extension revision 5
VK_KHR_device_group_creation : extension revision 1
VK_KHR_display : extension revision 23
VK_KHR_external_fence_capabilities : extension revision 1
VK_KHR_external_memory_capabilities : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_display_properties2 : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2 : extension revision 1
VK_KHR_portability_enumeration : extension revision 1
VK_KHR_surface : extension revision 25
VK_KHR_surface_protected_capabilities : extension revision 1
VK_KHR_wayland_surface : extension revision 6
VK_KHR_xcb_surface : extension revision 6
VK_KHR_xlib_surface : extension revision 6
VK_LUNARG_direct_driver_loading : extension revision 1
VK_NV_display_stereo : extension revision 1

Instance Layers: count = 4

VK_LAYER_MESA_device_select Linux device selection layer 1.4.303 version 1
VK_LAYER_MESA_overlay Mesa Overlay layer 1.4.303 version 1
VK_LAYER_NV_optimus NVIDIA Optimus layer 1.4.312 version 1
VK_LAYER_NV_present NVIDIA GR2608 layer 1.4.312 version 1

Devices:

GPU0:
apiVersion = 1.4.312
driverVersion = 580.95.5.0
vendorID = 0x10de
deviceID = 0x2e12
deviceType = PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU
deviceName = NVIDIA Tegra NVIDIA GB10
driverID = DRIVER_ID_NVIDIA_PROPRIETARY
driverName = NVIDIA
driverInfo = 580.95.05
conformanceVersion = 1.4.1.3
deviceUUID = 7e67e374-fd89-8bf9-9d17-30a454984aa7
driverUUID = b92269a1-b525-5615-ab8a-e2095ee37192
GPU1:
apiVersion = 1.4.305
driverVersion = 0.0.1
vendorID = 0x10005
deviceID = 0x0000
deviceType = PHYSICAL_DEVICE_TYPE_CPU
deviceName = llvmpipe (LLVM 20.1.2, 128 bits)
driverID = DRIVER_ID_MESA_LLVMPIPE
driverName = llvmpipe
driverInfo = Mesa 25.0.7-0ubuntu0.24.04.2 (LLVM 20.1.2)
conformanceVersion = 1.3.1.1
deviceUUID = 6d657361-3235-2e30-2e37-2d3075627500
driverUUID = 6c6c766d-7069-7065-5555-494400000000

Looks good with regard to the VK_KHR_display extension, from what I can tell.
vkcube is also successful and a cube is rendered.

When I enter the docker container with

docker run --rm -it --entrypoint bash holohub:depth_anything_v2

Neither vulkaninfo nor vkcube is found.

Thanks,

Ralph

Indeed, it looks good outside of the docker container.

vulkaninfo and vkcube are part of the vulkan-tools package. You have to install it in the container using apt update && apt install vulkan-tools. Also make sure to run the container with --runtime nvidia and run nvidia-smi to see if the GPU is available.
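Put together, a minimal in-container check might look like this (a sketch; holohub:depth_anything_v2 is the image tag used earlier in this thread, so adjust it to your local tag):

```shell
# Image tag from this thread - substitute your own if it differs.
image=holohub:depth_anything_v2

# Dry run: print the command; remove the leading `echo` to actually
# start an interactive shell in the container. Assumes an X11 session
# on the host and the NVIDIA container runtime installed.
echo docker run --rm -it --runtime=nvidia --gpus all \
     -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
     --entrypoint bash "$image"

# Then, inside the container:
#   apt update && apt install -y vulkan-tools
#   nvidia-smi                                   # is the GPU visible?
#   vulkaninfo --summary | grep VK_KHR_display   # is the extension there?
#   vkcube                                       # does a cube render?
```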

I ended up getting it to work by starting the docker container not via the holohub CLI, but by issuing the docker run command separately.

It worked fine if I executed this:

docker run --net host --interactive --tty -u 1000:1000 -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /home/hexagonme/repos/holohub:/workspace/holohub -w /workspace/holohub --gpus all --cap-add CAP_SYS_PTRACE --ipc=host -v /dev:/dev --device-cgroup-rule "c 81:* rmw" --device-cgroup-rule "c 189:* rmw" -e NVIDIA_DRIVER_CAPABILITIES=graphics,video,compute,utility,display -e HOME=/workspace/holohub -e CUPY_CACHE_DIR=/workspace/holohub/.cupy/kernel_cache -e HOLOHUB_BUILD_LOCAL=1 --rm --ipc=host --cap-add=CAP_SYS_PTRACE --ulimit=memlock=-1 --ulimit=stack=67108864 --device /dev/video1 --device /dev/video0 --device /dev/snd/controlC1 --device /dev/snd/controlC0 --device /dev/snd/pcmC1D0c --device /dev/snd/pcmC0D9p --device /dev/snd/pcmC0D8p --device /dev/snd/pcmC0D7p --device /dev/snd/pcmC0D3p --device /dev/snd/timer --device /dev/snd/seq --device /dev/infiniband/rdma_cm --device /dev/infiniband/uverbs3 --device /dev/infiniband/uverbs2 --device /dev/infiniband/uverbs1 --device /dev/infiniband/uverbs0 -v /usr/lib/aarch64-linux-gnu/nvidia:/usr/lib/aarch64-linux-gnu/nvidia --device /dev/nvidia0 --device /dev/nvidia-modeset -v /usr/share/nvidia/nvoptix.bin:/usr/share/nvidia/nvoptix.bin:ro --group-add 44 --group-add 993 --group-add 988 --group-add 29 -e XDG_SESSION_TYPE -e XDG_RUNTIME_DIR -v /run/user/1000:/run/user/1000 -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e PYTHONPATH=/opt/nvidia/holoscan/python/lib:/workspace/holohub/benchmarks/holoscan_flow_benchmarking -e DISPLAY=:1 --runtime=nvidia --entrypoint=/bin/bash holohub-depth_anything_v2:8cba8281cf92 -c "./holohub run depth_anything_v2 --language python --local"

Not sure why it gets hung up with holohub…

Thanks a lot for your help, Andreas.

Great, glad that I could help.