Capture frames from camera via Python scripting


Yeah, I see, no worries :)
So here is my script running in headless mode. For the above 3 issues, the respective parts are:

The Sim signaled an error as follows:

2021-01-06 05:10:59 [6,579ms] [Warning] [omni.usd] Warning: in SdfPath at line 97 of /opt/buildagent-share/work/da639afa0455b478/USD/pxr/usd/lib/sdf/path.cpp -- Ill-formed SdfPath </0_Camera>: syntax error

2021-01-06 05:10:59 [6,579ms] [Warning] [omni.client.plugin]  Main: usd_plugin: Ill-formed SdfPath </0_Camera>: syntax error
2021-01-06 05:10:59 [6,579ms] [Error] [omni.usd.python] ErrorException: 
	Error in 'pxrInternal_v0_19__pxrReserved__::UsdStage::_IsValidPathForCreatingPrim' at line 3023 in file /opt/buildagent-share/work/da639afa0455b478/USD/pxr/usd/lib/usd/stage.cpp : 'Path must be an absolute path: <>'

(2) As seen here, I tried to make the Sim wait at line 293 of my script,
but the events still failed to be caught in on_stage_event() at line 119.

(3) Camera spawning & activation

Thank you.

Hello toni,

Thank you for your answer. Following your steps, I can run the python code from the Script Editor in Kit as in #2. But when I run it directly as a python script, it gives me this error:

Traceback (most recent call last):
File "test/", line 5, in
import omni.usd
File "/home/dellstation/isaac-sim-2020.2.2007-linux-x86_64-release/_build/target-deps/kit_sdk_release/_build/linux-x86_64/release/plugins/bindings-python/omni/usd/", line 1, in
from ._usd import *
ImportError: /home/dellstation/isaac-sim-2020.2.2007-linux-x86_64-release/_build/target-deps/kit_sdk_release/_build/linux-x86_64/release/plugins/bindings-python/omni/usd/…/…/…/./ undefined symbol: _ZTIN32pxrInternal_v0_19__pxrReserved__12UsdGeomGprimE

Do you have any idea about this issue? I have already run the python install and source steps.


Hi @rosy.luo

To run the code I described directly as a python script, I think it is necessary to integrate it with an OmniKitHelper object (OmniKitHelper takes care of launching Kit and gives us control over an update() function we can call whenever we would like to render a new frame). You can check the Basic Time Stepping Example to see how to create and use OmniKitHelper.
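To make the shape of that integration concrete, here is a minimal pure-Python sketch of the "helper drives the render loop" pattern. StubKitHelper is a hypothetical stand-in for OmniKitHelper (the real class comes from omni.isaac.synthetic_utils and actually launches Kit); only the structure of the calling code is the point here.

```python
# Sketch only: StubKitHelper is NOT the real OmniKitHelper, it just
# mimics its calling pattern (construct with config, call update()
# once per frame you want rendered, shut down at the end).

class StubKitHelper:
    def __init__(self, config=None):
        # The real helper launches Kit here using the given config.
        self.config = config or {}
        self.frames_rendered = 0

    def update(self):
        # The real helper renders one new frame per call.
        self.frames_rendered += 1

    def shutdown(self):
        # The real helper tears Kit down here.
        pass


def main():
    kit = StubKitHelper(config={"headless": True})
    # The script owns the loop: one update() call per desired frame.
    for _ in range(10):
        kit.update()
    kit.shutdown()
    return kit.frames_rendered


if __name__ == "__main__":
    print(main())  # 10 frames stepped
```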


Hi @Tadinu

First, with respect to the name of the camera prim: it seems a prim name cannot start with a number. The solution is easy: start the name with a letter or an underscore, e.g. _0_Camera.
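The rule can be checked up front before building the path. The regex below is my rough approximation of USD's identifier rule (first character a letter or underscore, the rest letters, digits, or underscores); USD itself exposes a similar check as pxr.Sdf.Path.IsValidIdentifier, which is the authoritative one to use inside the Sim.

```python
import re

# Rough approximation of USD's prim-name rule: a valid identifier
# starts with a letter or underscore, followed by letters, digits,
# or underscores. Names like "0_Camera" fail and produce the
# "Ill-formed SdfPath" warning seen above.
_IDENT_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")


def is_valid_prim_name(name: str) -> bool:
    return bool(_IDENT_RE.match(name))


print(is_valid_prim_name("0_Camera"))   # False: starts with a digit
print(is_valid_prim_name("_0_Camera"))  # True: underscore prefix fixes it
```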

On the other hand, I tested your code, and after some tries I found how to trigger on_stage_event(), but that approach never received the omni.usd.StageEventType.OPENED event.

Then I edited (simplified) the sample /isaac-sim/python_samples/syntheticdata/offline_dataset/, included the event handling, and it works. I loaded the same scenario, but directly from the Nucleus server. You can also use the scenario_path parameter of the class constructor to provide a full path to local assets.

Here is the code. I recommend you merge the other parts of your code (creating the camera and accessing the synthetic data) into this class and work from there.

import asyncio
import os
import signal

import carb
import omni
from omni.isaac.synthetic_utils import OmniKitHelper, SyntheticDataHelper
from omni.isaac.utils.scripts.nucleus_utils import find_nucleus_server

# Default rendering parameters
RENDER_CONFIG = {
    "width": 600,
    "height": 600,
    "renderer": "PathTracing",
    "samples_per_pixel_per_frame": 12,
    "max_bounces": 10,
    "max_specular_transmission_bounces": 6,
    "max_volume_bounces": 4,
    "subdiv_refinement_level": 2,
    "headless": True,
    "experience": f'{os.environ["EXP_PATH"]}/isaac-sim-python.json',
}

class Scenario():
    def __init__(self, scenario_path=None):

        self.kit = OmniKitHelper(config=RENDER_CONFIG)
        self.sd_helper = SyntheticDataHelper()
        self.stage = self.kit.get_stage()
        self.result = True
        if scenario_path is None:
            self.result, nucleus_server = find_nucleus_server()
            if self.result is False:
                carb.log_error("Could not find nucleus server with /Isaac folder")
            self.asset_path = nucleus_server + "/Isaac"
            scenario_path = self.asset_path + "/Environments/Simple_Warehouse/warehouse.usd"
        self.scenario_path = scenario_path

        current_context = omni.usd.get_context()
        self.stage_event = current_context.get_stage_event_stream().create_subscription_to_pop(self.on_stage_event)

        self.exiting = False

        signal.signal(signal.SIGINT, self._handle_exit)

    def _handle_exit(self, *args, **kwargs):
        print("exiting dataset generation...")
        self.exiting = True

    def on_stage_event(self, in_event):
        if int(omni.usd.StageEventType.OPENED) == in_event.type:
            print('STAGE OPENED')

        elif int(omni.usd.StageEventType.CLOSED) == in_event.type:
            print('STAGE CLOSED')

        elif int(omni.usd.StageEventType.OPEN_FAILED) == in_event.type:
            print('Failed opening stage!')

        elif int(omni.usd.StageEventType.ASSETS_LOADED) == in_event.type:
            print("Stage's assets have been all loaded!")

        elif int(omni.usd.StageEventType.ASSETS_LOAD_ABORTED) == in_event.type:
            print("Stage's assets loading has been aborted!")

    async def load_stage(self, path):
        await omni.kit.asyncapi.open_stage(path)

    def _setup_world(self, scenario_path):
        # Load the scenario and step Kit until the loading task completes
        setup_task = asyncio.ensure_future(self.load_stage(scenario_path))
        while not setup_task.done():
            self.kit.update()
        print("stage loaded")

    def run(self):
        for i in range(1000):
            # step once and then wait for materials to load
            self.kit.update()
            while self.kit.is_loading():
                self.kit.update()

if __name__ == "__main__":
    s = Scenario()
    s.run()


Thank you for the pointer. So it is all about using the async omni.kit.asyncapi.open_stage() instead of omni.usd.get_context().open_stage(), I think.

Besides, just a note: I observed that the omni.usd.StageEventType.OPENED event appears to be triggered before the stage loading task is done, i.e. before the waiting loop exits. So creating my camera and then activating it after the loop also works.
It could actually also be put into a callback using setup_task.add_done_callback().
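The add_done_callback() variant can be sketched with plain asyncio. Here load_stage is a hypothetical stand-in for the real async open (omni.kit.asyncapi.open_stage), and the callback is where the camera creation would go instead of polling after the loop.

```python
import asyncio

events = []


async def load_stage(path):
    # Hypothetical stand-in for omni.kit.asyncapi.open_stage(path)
    await asyncio.sleep(0)
    events.append(f"loaded {path}")


def on_stage_loaded(task):
    # Runs once the loading task completes; a good place to
    # create and activate the camera instead of polling.
    events.append("camera created")


async def main():
    task = asyncio.ensure_future(load_stage("warehouse.usd"))
    task.add_done_callback(on_stage_loaded)
    await task
    # Done-callbacks are scheduled on the event loop; yield once
    # so the callback actually fires before we inspect the result.
    await asyncio.sleep(0)


asyncio.run(main())
print(events)  # ['loaded warehouse.usd', 'camera created']
```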

Thanks a lot again for all of your continued support!



Sorry for bugging you again…
(1) So in ISAAC_SIM/_build/linux-x86_64/release/exts/omni.isaac.synthetic_utils/omni/isaac/synthetic_utils/scripts/, I tried using the function _get_sensor_cuda_tensor() just out of curiosity, by enabling the line:

mode = 'cuda' if use_torch else 'numpy'

though the comment clearly notes that “Currently only numpy output is supported”.

I hit a segmentation fault when printing out tensor_data.data_ptr with dtype uint8:

tensor_data = get_sensor[dtype](sensor, width, height, row_size)

Would you mind confirming this?
Besides, there is an error on:

return torch_wrap.wrap_tensor(tensor_data)

due to omni.syntheticdata._syntheticdata.PyTorchTensorByte not matching the type defined in ISAAC_SIM/python_samples/torch_wrap/Py_WrapTensor.h, but I managed to resolve it with some extra python binding.

(2) The reason I wanted to try that cuda method is that I happened to measure SyntheticDataHelper's get_groundtruth() (using time.perf_counter()), and it normally takes over 0.5 seconds, depending on how complicated the scene is.
That is pretty long, and I guess it is attributable to the numpy path used at the moment. Could you tell how much faster the ground truth extraction would be if the _get_sensor_cuda_tensor() function were used?
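For reference, the timing measurement described above can be wrapped in a small helper. measure_ms and the fake workload below are illustrative; in the real script the timed call would be sd_helper.get_groundtruth(...).

```python
import time


def measure_ms(fn, *args, **kwargs):
    # Wall-clock the call with time.perf_counter and return
    # (result, elapsed milliseconds).
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms


# Fake workload standing in for sd_helper.get_groundtruth(...)
def fake_groundtruth():
    time.sleep(0.01)
    return {"rgb": None}


gt, ms = measure_ms(fake_groundtruth)
print(f"get_groundtruth took {ms:.1f} ms")
```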

Thank you.