Build Omniverse kit-app-template on my Linux CLI workstation, which has no monitor

Operating System: Windows / Linux
Kit Version: 110 (Kit App Template) / 109 (Kit App Template) / 108 (Kit App Template) / 107 (Kit App Template) / 106 (Kit App Template) / 105 (Launcher)
Kit Template: USD Composer / USD Explorer / USD Viewer / Custom
GPU Hardware: A series (Blackwell) / A series (ADA) / A series / 50 series / 40 series / 30 series
GPU Driver: Latest / Recommended (573.xx) / Other
**Workflow:** Building and running the NVIDIA Omniverse Kit SDK (kit-app-template, main branch, Kit 110.0.0) on a headless Ubuntu 24.04.3 LTS server with an NVIDIA A100-PCIE-40GB GPU (no display connected). The goal is to run Omniverse Kit headless in `--no-window` streaming mode for a digital twin application.
**Main Issue:** Every `./repo.sh build` after creating an app via `./repo.sh template new` crashes with a SIGSEGV (exit code -11) during the precache_exts step. Kit crashes within one second of startup, always at the same point (during `omni.kit.async_engine` startup) and always with an identical backtrace.
Root Cause Identified:
Kit 110 bundles its own Python 3.12 built against manylinux_2_35 (glibc 2.35 = Ubuntu 22.04). Ubuntu 24.04 has glibc 2.39. When Kit's Python initialises, it searches `/usr/lib/python3.12/lib-dynload/` (a path hardcoded into the Python build) and loads the system's `_asyncio.cpython-312-x86_64-linux-gnu.so`, which was compiled against glibc 2.39. This causes an ABI mismatch with Kit's own `libpython3.12.so.1.0`, resulting in an immediate SIGSEGV.
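One quick sanity check on the mismatch described above is to ask the host's glibc for its version directly. This is a minimal sketch, assuming a Linux host with glibc; the 2.35 threshold comes from the manylinux_2_35 tag mentioned above:

```python
import ctypes

# gnu_get_libc_version is a glibc-specific API (Linux only).
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
version = libc.gnu_get_libc_version().decode()
print("host glibc:", version)  # e.g. "2.39" on Ubuntu 24.04, "2.35" on 22.04

# Kit 110's bundled Python targets manylinux_2_35, i.e. glibc <= 2.35.
major, minor = (int(x) for x in version.split(".")[:2])
if (major, minor) > (2, 35):
    print("host glibc is newer than the 2.35 that Kit's Python was built against")
```

If this prints a version above 2.35, any system extension module Kit's Python picks up from `lib-dynload` was built against a newer glibc than the bundled interpreter.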
Reproduction Steps:
**Error Code:**

I'm trying to use my workstation for kit-app-template; most of my workflows ran out of memory on my laptop, so I want to use the workstation for compute and stream to my laptop.
Are there any workarounds so that I can build kit-app-template on my workstation and stream it to my laptop using WebRTC or another streaming method?

The first thing to say is that Omniverse does not run on an A100: that GPU does not support RTX, so the renderer cannot run. The second thing to say is that you "can" run it headless, but you have to add the correct flag for it. You also need a full GUI driver installed, as if a monitor were present; Kit has to "think" there is a monitor there. You cannot run it on a system that is not set up for a GUI or a monitor.
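One common way to satisfy the "Kit has to think there is a monitor" requirement on a truly headless box is a virtual X server. This is a sketch under assumptions, not a verified recipe: the package name (`xvfb` on Ubuntu/Debian) and the exact `repo.sh launch` passthrough syntax may differ in your setup; `--no-window` is the headless flag discussed in this thread:

```shell
# Install a virtual framebuffer X server (assumes Ubuntu/Debian).
sudo apt-get update && sudo apt-get install -y xvfb

# xvfb-run starts a throwaway X display, sets DISPLAY for the child process,
# and tears the display down afterwards; -a picks a free display number.
xvfb-run -a ./repo.sh launch -- --no-window
```

Even with a virtual display, the RTX renderer still needs an RTX-capable GPU, so this only addresses the "no monitor" part of the problem.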

By the way, your error code indicates a memory access violation in the environment (presumably a container) you are running in.

SIGSEGV is the POSIX signal name for a segmentation fault; when a process exits with code -11 on a Unix-like system, it usually means it was terminated by this signal.

  • A segmentation fault happens when a program tries to read or write memory that it is not allowed to access (e.g., dereferencing a bad/null pointer, accessing freed memory, or going out of bounds on an array).
  • The kernel sends signal number 11 (SIGSEGV) to the process, and many shells report this as “killed by signal 11” or as an exit code of 128 + 11 = 139; some tools shorthand it as -11.
  • Fixing it typically involves running under a debugger (like gdb, cuda-gdb, or your IDE’s debugger), getting a backtrace at the crash point, and then checking for invalid pointer/array/memory usage at that location.
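The 128 + 11 = 139 convention and the -11 shorthand are easy to see in practice. A minimal sketch in Python, where a child process raises SIGSEGV on itself to mimic a real segfault:

```python
import signal
import subprocess
import sys

# Child process sends SIGSEGV to itself, mimicking a real segmentation fault.
child = subprocess.run(
    [sys.executable, "-c",
     "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"]
)

# subprocess uses the "-signum" shorthand for signal deaths: -11 here.
print("returncode:", child.returncode)                 # -11
# A POSIX shell reports 128 + signum instead: 128 + 11 = 139.
print("shell would report:", 128 - child.returncode)   # 139
```

This is exactly why `./repo.sh build` reports exit code -11: the tooling runs Kit as a subprocess and surfaces the negative-signal convention.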

Refer to the docs - https://docs.omniverse.nvidia.com/dev-guide/latest/common/technical-requirements.html

The A100 is called out in those requirements as "may be possible", but without support guarantees.

Thank you, Richard, for your explanation. I spun up an EC2 instance running Omniverse. There is little chance for me to debug the error on the workstation, because file permissions were set in place to protect data, and I don't know whether my organization would allow me to change that. If I do want to debug, where exactly should I look?

Do you mean debug your local workstation or the EC2 instance? How is the EC2 instance working? Is it running Omniverse OK?

As stated, you will probably not get the A100 running because of the lack of RTX support, even if you solve the memory access issue. You can technically "run" Omniverse Kit on a wide variety of GPUs, and it will open, but if the GPU does not support RTX (as the A100 does not), you will just get a dead black screen in the RTX viewport.

I would stick with the EC2 instance or get a new local workstation with a more powerful card. A GeForce card would also work fine if you get a powerful model above a 3080. For workstation cards, you would want an A6000.

@Richard3D I have started using kit-app-template on an EC2 instance powered by an L40S GPU, and it's running well. I've been testing the cloudxr lovxr example and the client sample kit on the EC2 instance, using a Quest 3.

Great, glad to hear.