Isaac Sim Docker Container Stuck

Installation of the Docker container stalls at the following point. We have tried several different workstations and driver configurations. Unfortunately, the log file does not provide any more information.

CPU usage (only one core at 100% load) and GPU usage (5-7% average) are very low, as are the upload and download volumes.

$ sudo docker run --name isaac-sim --entrypoint bash -it --gpus all -e "ACCEPT_EULA=Y" --rm --network=host     -v /etc/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json     -v /etc/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json     -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json     -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw     -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw     -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw     -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw     -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw     -v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw     -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw     -v ~/docker/isaac-sim/documents:/root/Documents:rw     nvcr.io/nvidia/isaac-sim:2022.2.0
root@ThinkStation:/isaac-sim $ ./runheadless.native.sh 

The NVIDIA Omniverse License Agreement (EULA) must be accepted before
Omniverse Kit can start. The license terms for this product can be viewed at
https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html

[Info] [carb] Logging to file: /root/.nvidia-omniverse/logs/Kit/Isaac-Sim/2022.2/kit_20230109_161414.log
2023-01-09 16:14:14 [60ms] [Warning] [omni.ext.plugin] [ext: omni.drivesim.sensors.nv.lidar] Extensions config 'extension.toml' doesn't exist '/isaac-sim/exts/omni.drivesim.sensors.nv.lidar' or '/isaac-sim/exts/omni.drivesim.sensors.nv.lidar/config'
2023-01-09 16:14:14 [60ms] [Warning] [omni.ext.plugin] [ext: omni.drivesim.sensors.nv.radar] Extensions config 'extension.toml' doesn't exist '/isaac-sim/exts/omni.drivesim.sensors.nv.radar' or '/isaac-sim/exts/omni.drivesim.sensors.nv.radar/config'
[0.334s] [ext: omni.stats-0.0.0] startup
[0.351s] [ext: omni.rtx.shadercache-1.0.0] startup
[0.357s] [ext: omni.assets.plugins-0.0.0] startup
[0.357s] [ext: omni.gpu_foundation-0.0.0] startup
2023-01-09 16:14:14 [349ms] [Warning] [carb] FrameworkImpl::setDefaultPlugin(client: omni.gpu_foundation_factory.plugin, desc : [carb::graphics::Graphics v2.11], plugin : carb.graphics-vulkan.plugin) failed. Plugin selection is locked, because the interface was previously acquired by: 
[0.361s] [ext: carb.windowing.plugins-1.0.0] startup
2023-01-09 16:14:14 [351ms] [Warning] [carb.windowing-glfw.plugin] GLFW initialization failed.
2023-01-09 16:14:14 [351ms] [Warning] [carb] Failed to startup plugin carb.windowing-glfw.plugin (interfaces: [carb::windowing::IGLContext v1.0],[carb::windowing::IWindowing v1.3]) (impl: carb.windowing-glfw.plugin)
[0.362s] [ext: omni.kit.renderer.init-0.0.0] startup

Driver versions:

$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A5000    Off  | 00000000:01:00.0 Off |                  Off |
| 30%   30C    P8    14W / 230W |    412MiB / 24564MiB |      3%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

OS Version: Ubuntu 20.04.5 LTS
Docker version 20.10.22, build 3a2c30b

Hi.

Please check if you have the file at this path: /etc/vulkan/icd.d/nvidia_icd.json.
It could be at /usr/share/vulkan/icd.d/nvidia_icd.json instead.

You may need to edit the docker run mount flags to match your host paths.

See the steps for AWS, which are slightly different from those for desktops:
https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_advanced_cloud_setup_aws.html#container-deployment
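To illustrate the mount-flag change: if the Vulkan files live under /usr/share on your host, you can bind-mount them from there while keeping the container-side paths the same. This is only a sketch based on the original command above; verify the actual file locations on your host first, and keep the remaining cache/log/config mounts from the original command unchanged.

```shell
# Mount the host's /usr/share Vulkan files to the paths the container expects.
# (Remaining -v cache/log/config mounts from the original command omitted here.)
sudo docker run --name isaac-sim --entrypoint bash -it --gpus all \
    -e "ACCEPT_EULA=Y" --rm --network=host \
    -v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
    -v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
    -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
    nvcr.io/nvidia/isaac-sim:2022.2.0
```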

Neither file was in its expected location. I just found the hint in the instructions…

For anyone stumbling across this, here is a quick check:

$ cat /etc/vulkan/icd.d/nvidia_icd.json 
$ cat /etc/vulkan/implicit_layer.d/nvidia_layers.json

If neither file is found, you need to use the /usr/share/... locations instead; cf. Container Installation: Step 7. Run - Note.
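The check above can also be scripted so it covers both candidate locations at once. A minimal sketch (the four paths are the /etc and /usr/share variants mentioned in this thread; the function name is my own):

```shell
#!/bin/sh
# Report which of the candidate NVIDIA Vulkan config files exist on the host,
# so you know which paths to use in the docker run -v flags.
check_vulkan_files() {
    for f in "$@"; do
        if [ -f "$f" ]; then
            echo "found:   $f"
        else
            echo "missing: $f"
        fi
    done
}

check_vulkan_files \
    /etc/vulkan/icd.d/nvidia_icd.json \
    /usr/share/vulkan/icd.d/nvidia_icd.json \
    /etc/vulkan/implicit_layer.d/nvidia_layers.json \
    /usr/share/vulkan/implicit_layer.d/nvidia_layers.json
```

Whichever pair reports "found" is the pair to put on the host side of the -v mounts.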

