Standalone examples launch failure on AWS

Hi, I’m trying to migrate from Isaac Sim 2022.1.1 to the latest release, 2022.2.0. I started a new AWS instance, pulled the container, and I am able to run it. The command ./ to launch the streaming application works fine, but if I try to run any of the standalone examples, e.g. by running:

./ ./standalone_examples/api/omni.isaac.kit/

I get a segmentation fault. The logs and terminal output say:

2023-02-23 23:17:36 [1,964ms] [Error] [] VkResult: ERROR_EXTENSION_NOT_PRESENT
2023-02-23 23:17:36 [1,964ms] [Error] [] vkCreateInstance failed. Vulkan 1.1 is not supported, or your driver requires an update.
2023-02-23 23:17:36 [1,964ms] [Error] [] carb::graphics::createInstance failed.
2023-02-23 23:17:36 [1,964ms] [Error] [omni.gpu_foundation_factory.plugin] Failed to create GPU foundation devices!

I am using a clean setup of the most recent NVIDIA Omniverse AMI for AWS. What is causing this error? For reference, my NVIDIA driver info (from nvidia-smi) is:

| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|   0  NVIDIA A10G         Off  | 00000000:00:1E.0 Off |                    0 |
|  0%   23C    P0    59W / 300W |      0MiB / 23028MiB |      0%      Default |
|                               |                      |                  N/A |
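The vkCreateInstance failure usually means the container cannot see the host's NVIDIA Vulkan driver files. Since the recommended launch command bind-mounts the NVIDIA Vulkan ICD and EGL vendor JSON files from the host, a quick host-side pre-flight check is to confirm those files actually exist before starting the container. A minimal sketch (paths follow the docker run command later in this thread; your distro may install them elsewhere):

```shell
#!/bin/sh
# Pre-flight check on the host: verify the NVIDIA Vulkan ICD, implicit layer,
# and EGL vendor files that the container expects to bind-mount are present.
missing=0
for f in /usr/share/vulkan/icd.d/nvidia_icd.json \
         /usr/share/vulkan/implicit_layer.d/nvidia_layers.json \
         /usr/share/glvnd/egl_vendor.d/10_nvidia.json; do
  if [ -f "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
    missing=$((missing + 1))
  fi
done
echo "$missing file(s) missing"
```

If any file is reported missing, the driver install on the AMI is incomplete or the files live in a different directory, and the corresponding `-v` mounts in the docker run command need to be adjusted.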

Please follow the instructions here; I’ve tested that the script works.
Run the Docker container with:

docker run --name isaac-sim --entrypoint bash -it --gpus all -e "ACCEPT_EULA=Y" --rm --network=host \
    -v /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json \
    -v /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json \
    -v /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
    -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
    -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
    -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
    -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
    -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
    -v ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config:rw \
    -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
    -v ~/docker/isaac-sim/documents:/root/Documents:rw \
    nvcr.io/nvidia/isaac-sim:2022.2.0
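Once inside the container's bash shell, a standalone example can be launched with the bundled python.sh wrapper, which sets up the Kit environment around the bundled Python. A minimal sketch, assuming the default /isaac-sim install path of the Isaac Sim container, with hello_world.py as a stand-in for whichever example you want to run:

```shell
#!/bin/sh
# Run a standalone example from inside the Isaac Sim container (sketch).
# /isaac-sim is the default container install path; adjust if yours differs.
ISAAC_ROOT=/isaac-sim
if [ -d "$ISAAC_ROOT" ]; then
  cd "$ISAAC_ROOT"
  # python.sh wraps the bundled Python interpreter with the Kit environment.
  ./python.sh standalone_examples/api/omni.isaac.kit/hello_world.py
else
  echo "Isaac Sim not found at $ISAAC_ROOT (run this inside the container)"
fi
```

Note that the first launch can take a long time while shaders compile; mounting the cache directories as in the docker run command above makes subsequent launches much faster.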

Hi, thanks for the quick reply! It turned out I had some issues with my Docker container settings (I converted the long command you attached into a docker-compose file instead), and I also just needed to wait a very long time for the GPU shaders to compile on the first run.
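For anyone else doing the same conversion, the docker run flags above translate roughly into a compose file like this. This is a sketch, not verified against every setup; the image tag and host volume paths are assumptions taken from the command in this thread:

```yaml
# docker-compose.yml sketch equivalent to the docker run command above.
services:
  isaac-sim:
    image: nvcr.io/nvidia/isaac-sim:2022.2.0
    entrypoint: bash
    stdin_open: true     # -i
    tty: true            # -t
    network_mode: host   # --network=host
    environment:
      - ACCEPT_EULA=Y
    deploy:              # --gpus all
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    volumes:
      - /usr/share/vulkan/icd.d/nvidia_icd.json:/etc/vulkan/icd.d/nvidia_icd.json
      - /usr/share/vulkan/implicit_layer.d/nvidia_layers.json:/etc/vulkan/implicit_layer.d/nvidia_layers.json
      - /usr/share/glvnd/egl_vendor.d/10_nvidia.json:/usr/share/glvnd/egl_vendor.d/10_nvidia.json
      - ~/docker/isaac-sim/cache/ov:/root/.cache/ov
      - ~/docker/isaac-sim/cache/pip:/root/.cache/pip
      - ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache
      - ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache
      - ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs
      - ~/docker/isaac-sim/config:/root/.nvidia-omniverse/config
      - ~/docker/isaac-sim/data:/root/.local/share/ov/data
      - ~/docker/isaac-sim/documents:/root/Documents
```

The `deploy.resources.reservations.devices` block is the compose-spec equivalent of `--gpus all` and requires a compose version that supports GPU device reservations.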

Appreciate the help!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.