Hi @mati-nvidia,
thanks for the video, I'll watch it in full soon.
The documentation says:
By default test system process (exttests.py) reads all [[python.module]] entries from the tested extension and searches for tests in each of them. You can override it by explicitly setting where to look for tests:
but there is already a dependency, so I'm not sure whether it then still looks at the [[python.module]] entries!?
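If I understand the docs correctly, the explicit override would look something like this in the extension.toml (the module name is just a placeholder):

```toml
[[test]]
# Assumed shape of the explicit override from the docs:
# search only these modules for tests instead of every [[python.module]] entry
pythonTests.include = ["my.extension.tests.*"]
```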
I have added omni.usd to the [[test]] section of the extension.toml:
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
    "omni.kit.ui_test", # UI testing extension
    "omni.usd",
]
and that error is gone.
However, I now get an error when doing a simple UsdGeom.Cube.Define.
Do I have to declare all dependencies of the package tree when running tests?
I have attached the log and the package, which is basically the default extension/test. test.zip (1.0 MB)
You should most likely add omni.usd as a regular dependency of your extension, not a test-only dependency. I'm hosting developer office hours in about 40 minutes. If you're around, we can chat about it then: NVIDIA Omniverse
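In your extension.toml that would look something like this:

```toml
[dependencies]
# Load omni.usd whenever the extension loads, not only during test runs
"omni.usd" = {}
```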
Like I was saying in the livestream, tests run in completely stripped-down instances of Kit. You need to create a stage in your test setUp before your API can get the stage. The tests in omni.usd are good examples to learn from.
Your assert was incorrectly formed. Your API returns the integer 1, and that is what you should be comparing against.
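As a pure-Python sketch of the corrected assertion shape (`get_result` here is just a stub standing in for the real extension API):

```python
import unittest

def get_result():
    # Stub standing in for the real extension API; it returns the
    # integer 1, not the label text shown in the UI
    return 1

class AssertShapeTest(unittest.TestCase):
    def test_result_is_integer(self):
        result = get_result()
        # Compare against the integer the API returns, not a string like "1"
        self.assertEqual(result, 1)
        self.assertNotEqual(result, "1")  # the label text would not match

# Run the test case programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AssertShapeTest)
outcome = unittest.TextTestRunner().run(suite)
```

In a real Kit test you would subclass omni.kit.test.AsyncTestCase instead, but the comparison works the same way.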
Thanks for that. It looked promising, as you are absolutely right: the result is a number, not the label text.
Unfortunately it still did not work for me on two machines.
On the first one, with an RTX card, the scene view didn't even open; that was with your zip file content unchanged.
On the second one, with a P600, I only incorporated your changes. There the scene view opened, but the test still failed.
I really can't tell from either log what the issue is. results.zip (37.8 KB)
Based on the logs, it looks like machine 2 worked. What made you think it didn't? I believe the tests are supposed to run headless, if that's maybe the confusion.
Another sanity check is to run the tests from the Extension Manager. Choose your extension and follow these steps:
Hi @volker.kuehn. I think it's going to be the hardware. The P600 is not an RTX card. Can you share details about the RTX card? I didn't see it in the log.
It failed with a hard crash, so it would be good to get the crash dumps too. They're found in a location like: C:\Users\<username>\AppData\Local\ov\data\Kit\<app>\<version>\crash_2023-03-19_16-58-03_36844.txt
Sorry for the constant delays; I'm not lazy, just in a different time zone.
Machine one is a P600, so no RTX card, with Create 2022.3.3 (fails in GUI)
Machine two is an RTX 3080 with Create 2022.3.3 (fails in GUI)
Machine three is an A100 in a VM on an OVX with Create 2022.3.1 (succeeds in GUI)
I retried this morning and none of them created a new crash dump.
I'm not sure if I can force that, or run the tests in the console without starting Create first.
Any chance I could talk directly to the developer?
In the meantime I found some ways to run tests headlessly, but how would I run a test headlessly for a single extension? Or for a list of extensions rather than all of them?
Hi @volker.kuehn. I’ve gone ahead and started the process to escalate the issue based on the information that you provided. As for your other questions:
I believe the tests run headlessly if you execute them from the command line. You can have a look at, and run, any of the test-*.bat files in the Kit installation.
There are a couple of ways that you can do that. In the batch files, you'll notice that the extension under test is specified by: --/exts/omni.kit.test/testExts/0='omni.kit.usd_undo'. This is a list setting, so you could add an entry for each extension you want to run in that batch (e.g. --/exts/omni.kit.test/testExts/1='omni.timeline'). You could specify the same in a kit file. Here's an example based off of omni.app.test_ext_kit_sdk.kit:
omni.app.test_ext_kit_sdk_mati.kit
[package]
title = "Extension Test Run Environment Used in Kit Repo"
version = "1.0.0"
keywords = ["app"]
[dependencies]
# Uncomment to enable python debugger
# "omni.kit.debug.python" = {}
[settings]
# Wait for native debugger to connect
# app.waitForDebugger = true
[settings.exts."omni.kit.debug.python"]
# Host and port for listen to debugger for
# host = "127.0.0.1"
# port = 3000
# Block until client (debugger) connected
# waitForClient = true
# break immediately (also waits for client)
# break = true
[settings.exts."omni.kit.test"]
# Run only selected tests, wildcards supported
# runTestsFilter = "*test name here*"
testExts = [
    "omni.timeline",
    "omni.kit.usd_undo",
]
# Test Settings overrides used only in kit repo. Applied for ALL extension tests.
[settings]
app.menu.legacy_mode = false
# Make sure extensions don't pull anything accidentally from downstream repos (circular dependency prevention)
app.extensions.registryEnabled = false
# Enable test coverage when running tests in test suite
exts."omni.kit.test".pyCoverageEnabled = true