Issues starting Isaac Sim headless in a Docker container and with the Workstation installation

Hello everyone,

I’ve been following both installation guides, for the workstation and for the container, but I seem to have problems with each of them.

  • Container Installation (https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_container.html)
    In my attempt to install and deploy the Isaac Sim container, I encountered a problem at step 8 of the guide. When executing ./runheadless.native.sh I get a warning that the EULA must be accepted, followed by the message Illegal instruction (core dumped).
    I already tried using different locations for the nvidia_icd.json and nvidia_layers.json files (as suggested in the guide) and verified with cat that these files actually exist. Every step before that worked flawlessly.
    I'm running the Docker container as follows:
mitarbeiter@vm-gpu-robot-sim:~$ docker run --name isaac-sim --entrypoint bash -it --gpus all -e "ACCEPT_EULA=Y" --rm --network=host \
    -v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
    -v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
    -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
    -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
    -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
    -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
    -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
    -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
    -v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw \
    -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
    -v ~/docker/isaac-sim/documents:/root/Documents:rw \
    nvcr.io/nvidia/isaac-sim:2022.2.0
root@vm-gpu-robot-sim:/isaac-sim# ./runheadless.native.sh 

The NVIDIA Omniverse License Agreement (EULA) must be accepted before
Omniverse Kit can start. The license terms for this product can be viewed at
https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html

Illegal instruction (core dumped)
  • Workstation Installation
    I also tried a normal Workstation installation, but with no success either. I followed each step of the guide, but it already fails when running ./omniverse-launcher-linux.AppImage.
    It seems that some files are missing, but I'm not sure what causes this or how to fix it.
    Here is the full log I receive:
[2023-01-20 17:31:13.546] [info]  Omniverse Launcher 1.8.2 (production)
[2023-01-20 17:31:13.563] [info]  Argv: /tmp/.mount_omnive6N0ZyB/omniverse-launcher
[2023-01-20 17:31:13.564] [info]  Crash dumps directory: /home/mitarbeiter/.config/omniverse-launcher/Crashpad
[2023-01-20 17:31:13.795] [debug] Running "/home/mitarbeiter/.local/share/ov/pkg/cache-2022.2.0/System Monitor/omni-system-monitor"
[2023-01-20 17:31:13.813] [debug] Reset current installer.
[2023-01-20 17:31:13.853] [info]  Running production web server.
[2023-01-20 17:31:13.864] [info]  HTTP endpoints listening at http://localhost:33480
[2023-01-20 17:31:13.892] [debug] Sharing: false
[2023-01-20 17:31:13.977] [info]  Started the Navigator web server on 127.0.0.1:34080.
[2023-01-20 17:31:15.065] [info]  Saving omniverse-launcher.desktop file to /tmp/omniverse-launcher-dYkRzA...
[2023-01-20 17:31:15.065] [debug] 
 [Desktop Entry]
Name=omniverse-launcher
Exec="/home/mitarbeiter/Omniverse/omniverse-launcher-linux.AppImage" --no-sandbox %u
Type=Application
Terminal=false
MimeType=x-scheme-handler/omniverse-launcher
[2023-01-20 17:31:15.173] [info]  Saving omniverse.desktop file to /tmp/omniverse-launcher-iE8Yuy...
[2023-01-20 17:31:15.174] [debug] 
 [Desktop Entry]
Name=omniverse-launcher
Exec="/home/mitarbeiter/Omniverse/omniverse-launcher-linux.AppImage" --no-sandbox %u
Type=Application
Terminal=false
MimeType=x-scheme-handler/omniverse
[2023-01-20 17:31:15.267] [error] (node:4766) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, unlink '/home/mitarbeiter/.config/autostart/nvidia-omniverse-launcher.desktop'
(Use `omniverse-launcher --trace-warnings ...` to show where the warning was created)
[2023-01-20 17:31:15.268] [error] (node:4766) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)

This is the information I get from nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:00:10.0 Off |                  N/A |
| 30%   24C    P8    18W / 250W |      5MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:00:11.0 Off |                  N/A |
| 30%   27C    P8    10W / 250W |      5MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1108      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A      1108      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+

I hope a solution can be found to these issues.
Many thanks in advance

Hi. Please share your Isaac Sim and post-install log files.

Can you try running

./isaac-sim.headless.native.sh --allow-root

in the container? Which GPU are you running on?
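You can list them on the host with:

nvidia-smi -L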

Also, does Create and Code install and launch successfully from the Launcher?

Well, I can't really install the Omniverse Launcher on my Ubuntu machine, so I don't have any Isaac Sim or post-install log files. Create and Code can't be installed or launched because I can't use the Launcher. All I get is the error log I posted (which is the only log file I could find in the .nvidia-omniverse directory) and a white screen.

Running ./isaac-sim.headless.native.sh --allow-root removes the EULA warning, but it still gives me Illegal instruction (core dumped).

The GPUs I'm running on are two GeForce RTX 2080 Tis, and I'm using Ubuntu 20.04.5 LTS.

If you can't run the Isaac Sim container, you might be having a driver issue. Can you try doing a clean install of the drivers? Take a look at the Linux Troubleshooting — Omniverse Robotics documentation.

The Omniverse Launcher install error could be a file permission issue. We will need to move this to the OV Launcher team to take a look at that error.

Please try reinstalling the drivers first.
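Roughly, a clean re-install looks like this (a sketch; the exact .run file name depends on the driver version you download, 525.60.11 here is only an example):

# remove the packaged driver first, if it was installed from the Ubuntu repositories
sudo apt-get remove --purge '^nvidia-.*'
# stop the display manager / X server before running the installer
sudo systemctl isolate multi-user.target
# install the driver with the official .run installer, then reboot
sudo sh ./NVIDIA-Linux-x86_64-525.60.11.run
sudo reboot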

I did a clean install of the NVIDIA driver and followed the steps on the Linux Troubleshooting page.
Sadly this didn't help, and I still get the same errors inside the Docker container when trying to run Isaac Sim headless.

I'm not sure if this issue is related to your setup or to our container. Can you try running this other container:

docker run --rm --gpus all nvidia/cuda nvidia-smi

I tried running the container, but it was unable to find the image. That's the response I got:

mitarbeiter@vm-gpu-robot-sim:~$ docker run --rm --gpus all nvidia/cuda nvidia-smi
Unable to find image 'nvidia/cuda:latest' locally
docker: Error response from daemon: manifest for nvidia/cuda:latest not found: manifest unknown: manifest unknown.
See 'docker run --help'.

Sorry. The latest tag no longer exists. Try this:

docker run --rm --gpus all nvidia/cuda:12.0.0-devel-ubuntu20.04 nvidia-smi

Can you also run vkcube as shown here?
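For reference, the vkcube container test looks roughly like this (a sketch; adjust the mounted Vulkan JSON paths to wherever your driver installed them, and note that vkcube is assumed to be on the image's PATH):

docker run --gpus all \
    -e NVIDIA_DISABLE_REQUIRE=1 \
    -v $HOME/.Xauthority:/root/.Xauthority \
    -e DISPLAY -e NVIDIA_DRIVER_CAPABILITIES=all --device /dev/dri --net host \
    -v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
    -v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
    -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
    -it nvidia/vulkan:1.3-470 vkcube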

Now it worked. I could start the container without a problem.
I could also run vkcube, and it successfully showed me a spinning cube.

Hmm. So you can run those two containers but not Isaac Sim?
Can you check if this file exists and is not empty in the Isaac Sim container?

/etc/vulkan/icd.d/nvidia_icd.json

Also check if this file exists on your host:

/usr/share/vulkan/icd.d/nvidia_icd.json
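A quick way to check both (a sketch; test -s succeeds only if the file exists and is not empty):

# inside the Isaac Sim container
test -s /etc/vulkan/icd.d/nvidia_icd.json && echo "container ICD OK"
# on the host
test -s /usr/share/vulkan/icd.d/nvidia_icd.json && echo "host ICD OK"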

Correct, I was able to run both of these containers without a problem.
I checked; both files exist and are not empty:

mitarbeiter@vm-gpu-robot-sim:~$ docker run --name isaac-sim --entrypoint bash -it --gpus all -e "ACCEPT_EULA=Y" --rm --network=host -v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json -v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json     -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw     -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw     -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw     -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw     -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw     -v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw     -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw     -v ~/docker/isaac-sim/documents:/root/Documents:rw     nvcr.io/nvidia/isaac-sim:2022.2.0
root@vm-gpu-robot-sim:/isaac-sim# cat /etc/vulkan/icd.d/nvidia_icd.json
{
    "file_format_version" : "1.0.0",
    "ICD": {
        "library_path": "libGLX_nvidia.so.0",
        "api_version" : "1.2.175"
    }
}
root@vm-gpu-robot-sim:/isaac-sim# exit
exit
mitarbeiter@vm-gpu-robot-sim:~$ cat /usr/share/vulkan/icd.d/nvidia_icd.json
{
    "file_format_version" : "1.0.0",
    "ICD": {
        "library_path": "libGLX_nvidia.so.0",
        "api_version" : "1.2.175"
    }
}
mitarbeiter@vm-gpu-robot-sim:~$

Can you try running the Isaac Sim container with one of these flags:

--gpus '"device=0"' or --gpus '"device=1"'

instead of

--gpus all

I tried running with both flags and checked with nvidia-smi that only one GPU is visible, but I still get the same results when trying to run either ./isaac-sim.headless.native.sh --allow-root or ./runheadless.native.sh:

mitarbeiter@vm-gpu-robot-sim:~$ docker run --name isaac-sim --entrypoint bash -it --gpus '"device=0"' -e "ACCEPT_EULA=Y" --rm --network=host \
    -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
>     -v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
>     -v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
>     -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
>     -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
>     -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
>     -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
>     -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
>     -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
>     -v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw \
>     -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
>     -v ~/docker/isaac-sim/documents:/root/Documents:rw \
>     nvcr.io/nvidia/isaac-sim:2022.2.0
root@vm-gpu-robot-sim:/isaac-sim# ./runheadless.native.sh

The NVIDIA Omniverse License Agreement (EULA) must be accepted before
Omniverse Kit can start. The license terms for this product can be viewed at
https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html

Illegal instruction (core dumped)
root@vm-gpu-robot-sim:/isaac-sim# ./isaac-sim.headless.native.sh --allow-root
Illegal instruction (core dumped)
root@vm-gpu-robot-sim:/isaac-sim# nvidia-smi
Wed Jan 25 22:34:54 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:00:10.0 Off |                  N/A |
| 30%   25C    P8    18W / 250W |      6MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
root@vm-gpu-robot-sim:/isaac-sim# exit
exit
mitarbeiter@vm-gpu-robot-sim:~$ docker run --name isaac-sim --entrypoint bash -it --gpus '"device=1"' -e "ACCEPT_EULA=Y" --rm --network=host \
    -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
    -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
    -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
    -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
>     -v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
>     -v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
>     -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
>     -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
>     -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
>     -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
>     -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
>     -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
>     -v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw \
>     -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
>     -v ~/docker/isaac-sim/documents:/root/Documents:rw \
>     nvcr.io/nvidia/isaac-sim:2022.2.0
root@vm-gpu-robot-sim:/isaac-sim# ./runheadless.native.sh

The NVIDIA Omniverse License Agreement (EULA) must be accepted before
Omniverse Kit can start. The license terms for this product can be viewed at
https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html

Illegal instruction (core dumped)
root@vm-gpu-robot-sim:/isaac-sim# ./isaac-sim.headless.native.sh --allow-root
Illegal instruction (core dumped)
root@vm-gpu-robot-sim:/isaac-sim# nvidia-smi
Wed Jan 25 22:36:19 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:00:11.0 Off |                  N/A |
| 30%   27C    P8    10W / 250W |      6MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
root@vm-gpu-robot-sim:/isaac-sim# exit
exit
mitarbeiter@vm-gpu-robot-sim:~$

I'm out of ideas. Can you describe your setup?
Are you running on bare metal (like a desktop/laptop) with a monitor connected to one of the GPUs, or in the cloud? It looks like you are on a VM. How was it set up?

The situation is like this: I'm not working on my local machine; I'm using VNC to work remotely on an Ubuntu VM that runs a VNC server.
On my local setup, which has Windows 10 and an RTX 2070 Super, I had no problems installing and using Omniverse.
I have also seen this post; it seems to describe a situation similar to mine with the Workstation variant, but not the container version.
Could it be that VNC is causing issues for the container running Isaac Sim headless?

Yes. You may be having an issue similar to that post. VNC may not be hardware-accelerating the virtual display. Is it possible to try other streaming clients like RDP or NoMachine?

Apparently no other remote software tools have worked so far. I was told that I may soon have access to a server to use as a desktop, where the options for remote software are wider.

I'm asking myself: if VNC is the reason the container can't run Isaac Sim, why does it still not work when I simply connect through SSH instead of using a VNC client? Is it because the VNC server is still running on that machine?
And is VNC also responsible for the missing files when trying to use the Omniverse Launcher, or is something else causing that problem?

It could be caused by how the VM was set up, or by the drivers.
I find it odd that you can run the other containers and nvidia-smi, but not the Isaac Sim headless container.

Can you try removing the two folders below and re-installing the drivers via the .run installer?

/etc/vulkan/
/usr/share/vulkan/
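For example (a sketch):

sudo rm -rf /etc/vulkan/ /usr/share/vulkan/

and then re-run the NVIDIA .run installer you downloaded.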

I removed both folders before re-installing the driver via the .run installer, and some things changed.
First I checked whether

/etc/vulkan/icd.d/nvidia_icd.json

exists on my host, and it does, but with different data now:

mitarbeiter@vm-gpu-robot-sim:~$ cat /etc/vulkan/icd.d/nvidia_icd.json
{
    "file_format_version" : "1.0.0",
    "ICD": {
        "library_path": "libGLX_nvidia.so.0",
        "api_version" : "1.3.224"
    }
}

I also checked whether

/usr/share/vulkan/icd.d/nvidia_icd.json

exists on my host machine, but it is now a folder with no files in it, which I find odd; so the file does not exist.
Running nvidia-smi still works and gives the same results as always.

Next I tried running this container:

docker run --rm --gpus all nvidia/cuda:12.0.0-devel-ubuntu20.04 nvidia-smi

This also worked with no problems. Now come the first problems: vkcube is not working anymore, and I get this error message:

mitarbeiter@vm-gpu-robot-sim:~$ docker run --gpus all    
-e NVIDIA_DISABLE_REQUIRE=1   
-v $HOME/.Xauthority:/root/.Xauthority    
-e DISPLAY -e NVIDIA_DRIVER_CAPABILITIES=all --device /dev/dri --net host    
-v /etc/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json    
-v /etc/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json    
-v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json    
-it nvidia/vulkan:1.3-470 \ 
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: " ": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled 

After that I tried running the Isaac Sim container again with these flags:

-v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
-v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \

but both files on my host, /usr/share/vulkan/icd.d/nvidia_icd.json and /usr/share/vulkan/implicit_layer.d/nvidia_layers.json, are now directories, so I didn't expect any success. I then tried with these flags:

-v /etc/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
-v /etc/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \

because both of these files exist.
But no matter which flags I used, I still get the same results when trying to run Isaac Sim headless: running ./runheadless.native.sh still gives the EULA warning followed by the Illegal instruction (core dumped) message,
and running ./isaac-sim.headless.native.sh --allow-root still gives only Illegal instruction (core dumped).

I had a typo when trying to run vkcube, so it actually works like it did before the driver re-installation.

The only things that changed are that both files on my host, /usr/share/vulkan/icd.d/nvidia_icd.json and /usr/share/vulkan/implicit_layer.d/nvidia_layers.json, are now directories with no content inside them,
and that the value of “api_version” inside /etc/vulkan/icd.d/nvidia_icd.json changed.
The Isaac Sim container is still the only one that does not work, just as before.