Thanks for trying. I’m not sure if the changes to our recent container are causing your issues.
Can you try running a couple older versions of the Isaac Sim container?
I tried versions 2022.1.0 and 2022.1.1 of the Isaac Sim container, but I still get Illegal instruction (core dumped). Here is an example from the 2022.1.0 version:
mitarbeiter@vm-gpu-robot-sim:~$ docker run --name isaac-sim --entrypoint bash -it --gpus all \
    -e "ACCEPT_EULA=Y" --rm --network=host \
    -v /etc/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
    -v /etc/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
    -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
    -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
    -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
    -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
    -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
    -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
    -v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw \
    -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
    -v ~/docker/isaac-sim/documents:/root/Documents:rw \
    nvcr.io/nvidia/isaac-sim:2022.1.0
root@vm-gpu-robot-sim:/isaac-sim# ./runheadless.native.sh
The NVIDIA Omniverse License Agreement (EULA) must be accepted before
Omniverse Kit can start. The license terms for this product can be viewed at
https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html
libGLX_nvidia.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0
libGLX_nvidia.so.0 (libc6) => /usr/lib/i386-linux-gnu/libGLX_nvidia.so.0
Writing disposable ICD file (/tmp/tmp_icd_HgpUQQ.json)...
GPU0
apiVersion = 1.3.224
driverVersion = 525.60.11
vendorID = 0x10de
deviceID = 0x1e04
deviceName = NVIDIA GeForce RTX 2080 Ti
GPU1
apiVersion = 1.3.224
driverVersion = 525.60.11
vendorID = 0x10de
deviceID = 0x1e04
deviceName = NVIDIA GeForce RTX 2080 Ti
Writing ICD file to (/etc/vulkan/icd.d/nvidia_icd.json)
Illegal instruction (core dumped)
root@vm-gpu-robot-sim:/isaac-sim# ./isaac-sim.headless.native.sh --allow-root
Illegal instruction (core dumped)
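In case it helps to narrow this down: "Illegal instruction" means the CPU rejected an opcode the binary was built with (often a SIMD extension on virtualized CPUs), and the kernel usually logs which binary faulted. A diagnostic sketch, not from the original transcript:

```shell
# After a SIGILL crash, the kernel ring buffer typically contains a line like
# "traps: <process> trap invalid opcode ip:..." naming the faulting binary.
# dmesg may need root on some systems, hence the fallback to exit cleanly.
dmesg 2>/dev/null | grep -i 'invalid opcode' | tail -n 5 || true
```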
I wanted to try a version older than 2022, but I don’t know how to run Isaac Sim headless there.
Yes, the issue still remains @rthaker.
I recently suspected that the default Docker network configuration might be the problem, because of past issues with Docker containers, but changing it didn’t help either.
Hi. Were you able to resolve the issues installing the Launcher? Can you run Create and Code natively from the Launcher?
No, I was not able to resolve the issue with installing the Launcher.
When I run ./omniverse-launcher-linux.AppImage,
I still get the same issue as illustrated in my original post.
Therefore I can’t run Create and Code.
Hi. Have you tried deleting the folders below again and reinstalling the drivers?
/etc/vulkan/
/usr/share/vulkan/
On my repo I have a script under the dev tree’s docker folder.
You can use and adapt it to install your Isaac Sim container.
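The folder cleanup and driver reinstall suggested above could look roughly like this (the driver package name is an assumption; use whichever version the Omniverse documentation recommends):

```shell
# Remove the Vulkan ICD/layer configuration folders mentioned above,
# then reinstall the NVIDIA driver so they are recreated cleanly.
sudo rm -rf /etc/vulkan/ /usr/share/vulkan/

# Hypothetical package name for illustration -- pick the driver version
# recommended by the Omniverse documentation for your setup.
sudo apt-get install --reinstall nvidia-driver-525
sudo reboot
```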
Hey, yes, I deleted the folders and did a fresh reinstall with the provided driver, but I still get:
Illegal instruction (core dumped)
Unfortunately, I wasn’t able to resolve the issue yet. There is also no VNC server present on the machine anymore, so I don’t know what could be causing this.
It could be some VM setting, but I didn’t set up this machine; it was done by someone else.
Maybe you could add “--ignore-gpu-blocklist” when you run the AppImage. When I encountered the same error, I reinstalled the GPU driver according to the Omniverse documentation and added the startup option above, and then it worked for me.
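For clarity, the suggested invocation would be (flag taken from the post above; `--ignore-gpu-blocklist` is a Chromium/Electron option that skips the GPU blocklist check):

```shell
# Run the Omniverse Launcher AppImage with GPU blocklist checks disabled.
./omniverse-launcher-linux.AppImage --ignore-gpu-blocklist
```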
I know this issue is kind of old, but just in case it helps someone else: I had the exact same issue, where every time I ran headless Isaac Sim (native, WebRTC, whatever) it would error out with “Illegal instruction (core dumped)”. My drivers were all installed correctly, and I tried different versions, but nothing worked.
What I realized was that my Proxmox VM was configured with the generic “kvm64” CPU model, which hides most modern instruction-set extensions from the guest. Once I changed this to the “host” CPU type (passing the host CPU through to the VM), everything worked perfectly.
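For anyone hitting this, a quick way to confirm the diagnosis from inside the VM is to check whether the virtual CPU exposes AVX; checking /proc/cpuinfo is a generic Linux technique, not specific to Proxmox, and AVX is assumed here to be the missing extension:

```shell
# A generic "kvm64" vCPU model hides modern extensions such as AVX from the
# guest; binaries compiled against them then crash with SIGILL.
if grep -qw avx /proc/cpuinfo; then
    echo "AVX available - CPU model looks fine"
else
    echo "AVX missing - switch the VM CPU type to 'host'"
fi
```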