Run tests for python files not in


With this code I can run the add_cube method, which the Toolchain class provides, in the Script Editor:

import omni.kit.ui_test as ui_test
from import Toolchain
toolchain = Toolchain()
_count = toolchain.add_cube()

but when I run a test like this:

import omni.kit.test
import omni.kit.ui_test as ui_test

from import Toolchain
toolchain = Toolchain()

class Test(omni.kit.test.AsyncTestCase):
    async def setUp(self):
        pass

    async def tearDown(self):
        pass

    async def test_api_add_cube(self):
        _count = toolchain.add_cube()
        self.assertEqual(_count, 1)

it fails with

 No module named 'omni.usd'

Why is that?
Do I need special preparation to run function tests in files not called ?

Hi @volker.kuehn. You can check out this video that I made about automated testing. I don’t cover omni.kit.ui_test specifically, but the same principles should apply. Automated Testing - NVIDIA Omniverse Kit Extensions - YouTube

Without looking at more code, my guess is that your extension needs to properly declare omni.usd as a dependency. Also note that you can have test-only dependencies: Testing Extensions with Python — kit-manual 104.0 documentation
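As a sketch, declaring omni.usd as a regular dependency while keeping omni.kit.ui_test as a test-only one could look like this in extension.toml (a minimal fragment following the Kit extension format, not the full file):

```toml
# Regular dependencies: loaded whenever the extension is enabled.
[dependencies]
"omni.usd" = {}

# Extra dependencies only to be used during a test run.
[[test]]
dependencies = [
    "omni.kit.ui_test",
]
```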

Hi @mati-nvidia ,
thanks for the video, I’ll watch it completely soon.

the documentation says:

By default the test system reads all [[python.module]] entries from the tested extension and searches for tests in each of them. You can override it by explicitly setting where to look for tests:

but there is already a dependency, so I’m not sure whether it then looks at the python.module entries!?
I have added omni.usd to the test section in the extension.toml:

# Extra dependencies only to be used during test run
dependencies = [
    "omni.kit.ui_test", # UI testing extension
    "omni.usd",
]
and that error is gone.
However I now get an error when doing a simple UsdGeom.Cube.Define.

Do I have to think about all dependencies in the package tree when doing tests?

I have added the log and the package, which is basically the default extension/test. (1.0 MB)

You should most likely add omni.usd as a regular dependency for your extension, not a test-only dependency. I’m doing a developer office hour in about 40 mins. If you’re around, we can chat about it then: NVIDIA Omniverse

Hi @volker.kuehn. Thanks for the example. I saw a few problems.

  1. I moved omni.usd to a regular dependency instead of a test-only dependency because your extension relies on it.
  2. Like I was saying in the livestream, tests run in completely stripped-down instances of Kit. You need to create a stage in your test setUp before your API can get the stage. The tests in omni.usd are good examples to learn from.
  3. Your assert was incorrectly formed. Your API returns the integer 1, and that’s what you should be comparing against.
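The setUp/assert pattern from points 2 and 3 can be sketched in plain Python. Since omni.usd is only importable inside Kit, FakeStage and this Toolchain are illustrative stand-ins, not the real API; inside Kit you would instead await omni.usd.get_context().new_stage_async() in setUp:

```python
import unittest


class FakeStage:
    """Stand-in for a USD stage; just records defined prims."""

    def __init__(self):
        self.prims = []


class Toolchain:
    """Illustrative stand-in for the extension's Toolchain class."""

    def __init__(self, stage):
        self.stage = stage

    def add_cube(self):
        # Define a cube prim on the stage and return an integer count,
        # mirroring the behavior described in the thread.
        self.stage.prims.append("/World/Cube")
        return len(self.prims) if False else len(self.stage.prims)


class Test(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        # Inside Kit this would be:
        #     await omni.usd.get_context().new_stage_async()
        self.stage = FakeStage()
        self.toolchain = Toolchain(self.stage)

    async def test_api_add_cube(self):
        count = self.toolchain.add_cube()
        # Compare against the integer the API returns, not a label string.
        self.assertEqual(count, 1)
```

The key point is that each test creates its own fresh stage in setUp rather than assuming one already exists.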

I’ve attached your test extension with the fixes. (68.0 KB)

Hi @mati-nvidia ,

thanks for that. It looked promising, and you are absolutely right: the result is a number, not the label text.

Unfortunately it still did not work for me on two machines.
On the first one, with an RTX card, the scene view didn’t even open; there I used your zip file content unchanged.
On the second one, with a P600, I just incorporated your changes. There the scene view opened, but the test also failed.

I really can’t find what the issue is in either log. (37.8 KB)

Based on the logs, it looks like machine 2 worked. What made you think it didn’t? I believe the tests are supposed to run headless, if that’s maybe the confusion.

Another sanity check is to run the tests from the Extension Manager. Choose your extension and follow these steps:

Hi @mati-nvidia ,

so far I have only started the tests through the Create 2022.3.3 UI.

Only on machine 2 does it show the UI during the test run.


As a result it always shows the warning symbol, even on machine 2.


At a later stage, fully headless testing would be great. What explains the warning?


In the meantime I have been able to run the exact same code on a third machine, where it did pass.


That is a VM on an OVX. A bit strange that only there do I get the green check mark.

Hi @volker.kuehn. I think it’s going to be the hardware. The P600 is not an RTX card. Can you share details about the RTX card? I didn’t see it in the log.

It failed with a hard crash, so it’d be good to get the crash dumps too. They’re found in a location like:

Hi @mati-nvidia ,

sorry for the repeated delays. Not lazy, just a different time zone.

Machine one has a P600, so no RTX card, with Create 2022.3.3 (fail in GUI)
Machine two has an RTX 3080 with Create 2022.3.3 (fail in GUI)
Machine three has an A100 in a VM on an OVX with Create 2022.3.1 (success in GUI)

I retried this morning and none of them created a new crash dump.
Not sure if I can force that, or run it in the console without starting Create first.

Thanks @volker.kuehn. I’ve created an internal issue, OM-90487, so we can get a developer to help figure out what’s going on here.

Thanks a lot @mati-nvidia, let me know if there is anything I could do.

Hi @mati-nvidia , is there any news?

Any chance I could directly talk to the developer ?

In the meantime I found some ways to run tests headlessly, but how would I run a test headlessly for a single extension? Or for a list of extensions that is not all of them?


Hi @volker.kuehn. I’ve gone ahead and started the process to escalate the issue based on the information that you provided. As for your other questions:

  1. I believe the tests run headlessly if you execute them from the command line. You can have a look at, and run, any of the test-*.bat files in the kit installation.
  2. There are a couple of ways that you can do that. In the batch files, you’ll notice that the extension under test is specified by: --/exts/omni.kit.test/testExts/0='omni.kit.usd_undo'. This is a list setting, so you could add a flag for each extension you want to run in that batch (e.g. --/exts/omni.kit.test/testExts/1='omni.timeline'). You could specify the same in a kit file. Here’s an example based off of

title = "Extension Test Run Environment Used in Kit Repo"
version = "1.0.0"
keywords = ["app"]

# Uncomment to enable python debugger
# "omni.kit.debug.python" = {}

# Wait for native debugger to connect
# app.waitForDebugger = true

# Host and port for listen to debugger for
# host = ""
# port = 3000

# Block until client (debugger) connected
# waitForClient = true

# break immediately (also waits for client)
# break = true

# Run only selected tests, wildcards supported
# runTestsFilter = "*test name here*"
testExts = [
    # list the extensions to test here
]

# Test Settings overrides used only in kit repo. Applied for ALL extension tests.
[settings]

# Make sure extensions doesn't pull anything accidentally from downstream repos (circular dependency prevention)
app.extensions.registryEnabled = false

# Enable test coverage when running tests in test suite
exts."omni.kit.test".pyCoverageEnabled = true


@echo off
call "%~dp0\kit.exe"  apps/ --enable omni.kit.test --/app/enableStdoutOutput=0 --ext-folder "%~dp0/exts"  --ext-folder "%~dp0/apps"  --/exts/omni.kit.test/testExtOutputPath="%~dp0/../../../_testoutput"  --portable-root "%~dp0/"  --/telemetry/mode=test %*

Internally, we have a repo tool for testing that actually just runs the batch files in succession when we want to test multiple extensions.
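That batch-file-in-succession idea can be sketched as a small shell loop. KIT and the extension names below are placeholders; the real command line is the one used in the test-*.bat files above, shown here only as a comment:

```shell
#!/bin/sh
# Sketch: run each extension's tests in its own Kit invocation, back to
# back, and remember whether anything failed.
KIT="./kit"   # placeholder path to the Kit binary
failed=0
for ext in omni.kit.usd_undo omni.timeline; do
    echo "testing: $ext"
    # Uncomment inside a real Kit installation:
    # "$KIT" apps/omni.app.test_ext.kit --enable omni.kit.test \
    #     --/exts/omni.kit.test/testExts/0="$ext" || failed=1
done
echo "failed=$failed"
```

Each extension gets a clean Kit process, which mirrors how the batch files isolate test runs from each other.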

Hi @mati-nvidia ,

Greatly appreciated input! I needed a bit of time to understand it, but now I am able to run my tests headlessly, at least on the OVX and in Docker images.

Kind regards


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.