Multiprocessing in Isaac Sim

Hello, I have a robot in a scene, and I collect data from it the same way our real robot does: from three different sensors at different rates.

When using asyncio.ensure_future, our system slows down significantly, so it is not a viable solution.

What we need is multiprocessing, where each sensor collects its data (images, IMU, and odometry) at a different rate (each up to 100 FPS).

The whole system is up and running, but it is not viable with the current asynchronous methods. How can I run each signal in a different process?

Another issue with async: at the end of the simulation, I want to save all of the collected data, so I call asyncio.ensure_future(self.Stop())

We have tried using multiprocessing:

The issue is that it completely freezes the main UI thread. The editor freezes until

self.data_generator.save_data_to_system()

is done. So saving the data also needs to happen in a process separate from the main Omniverse process.

How can I do this? I’ve tried a simple process creation (just for testing), but it does not work:

(It does work if I run this code in the main process.)
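As a sketch of one possible workaround (not Isaac Sim-specific, and all names here are hypothetical): the blocking save can be moved off the Kit main thread into a background worker that drains a queue. A plain thread often suffices for disk I/O, because file writes release the GIL, and it sidesteps the pickling and interpreter re-launch issues that make `multiprocessing` fragile inside an embedded Python.

```python
import os
import pickle
import queue
import tempfile
import threading


def save_worker(data_queue: "queue.Queue") -> None:
    """Background worker: drain the queue and write each item to disk.

    File writes release the GIL, so the main (UI) thread keeps updating
    while the pickling and disk I/O happen here.
    """
    while True:
        item = data_queue.get()
        if item is None:  # sentinel: no more data
            break
        path, payload = item
        with open(path, "wb") as f:
            pickle.dump(payload, f)


out_dir = tempfile.mkdtemp()
data_queue = queue.Queue()
worker = threading.Thread(target=save_worker, args=(data_queue,), daemon=True)
worker.start()

# The main (UI) thread only enqueues; the worker does the blocking disk I/O
for i, sample in enumerate([{"imu": [0.0, 0.1]}, {"odom": [1.0, 2.0]}]):
    data_queue.put((os.path.join(out_dir, f"sample_{i}.pkl"), sample))
data_queue.put(None)  # sentinel: stop after the last item

# In an extension you would poll worker.is_alive() from an update callback
# instead of blocking; joining here just makes the example deterministic.
worker.join()
print(sorted(os.listdir(out_dir)))  # ['sample_0.pkl', 'sample_1.pkl']
```

This only helps if `save_data_to_system()` is I/O-bound; CPU-heavy serialization would still need a real subprocess with picklable data.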

Thank you for any input

Hi there,

I have a few questions about your setup: are you running the script in the Script Editor or as a standalone application?

Is a realtime factor needed in the simulation? Is it feeding the data somewhere that expects it to run at a given framerate, or do the sensors only need to maintain a relative update rate between them? If the data ends up being written to disk, the realtime factor is not needed.

For accessing the sensor data, are you using Replicator writers, the annotators directly, or a different approach?

Hi @ahaidu, I am running the script in a custom extension.

The realtime factor is needed in the sense that, for each sensor, I save data (in a queue) x times per second, including the timestamp for when that data was saved. I save the data to disk at the end of the simulation. In the near future, however, I will need to send this data in realtime rather than just collecting it and saving it to disk at the end of the simulation.

One of the signals is a camera signal, which gets the RGB, segmentation, and depth arrays with annotators directly.
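As an aside, the fixed-rate timestamped collection described above (save to a queue x times per second, with a timestamp) can be sketched with an elapsed-time accumulator that carries over the overshoot between triggers. All names and rates below are hypothetical:

```python
import queue


class RateSampler:
    """Collects (timestamp, sample) pairs at a fixed rate from a faster sim loop.

    An elapsed-time accumulator with carry-over keeps the average rate correct
    even when the sensor rate does not divide the loop rate.
    """

    def __init__(self, rate_hz: float):
        self.dt = 1.0 / rate_hz
        self.elapsed = 0.0
        self.samples = queue.Queue()

    def tick(self, sim_dt: float, timestamp: float, read_sensor):
        self.elapsed += sim_dt
        # Small epsilon guards against floating-point drift in the accumulator
        if self.elapsed + 1e-9 >= self.dt:
            self.samples.put((timestamp, read_sensor()))
            self.elapsed -= self.dt  # carry the overshoot into the next period


# Hypothetical sensors: camera at 10 Hz, IMU at 100 Hz, in a 100 Hz sim loop
camera = RateSampler(10.0)
imu = RateSampler(100.0)
sim_dt = 1.0 / 100.0
for frame in range(100):  # one simulated second
    t = frame * sim_dt
    camera.tick(sim_dt, t, read_sensor=lambda: "rgb_frame")
    imu.tick(sim_dt, t, read_sensor=lambda: (0.0, 0.0, 9.81))

print(camera.samples.qsize(), imu.samples.qsize())  # 10 100
```

The same accumulator pattern appears in the snippets below, driven by the timeline time instead of a fixed `sim_dt`.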

Here are Script Editor example snippets on how to write/access annotator data at custom framerates; can you check if they work for your use case?

  • Using writers:
import asyncio
import os

import carb.settings
import omni.kit.app
import omni.replicator.core as rep
import omni.timeline
import omni.usd

NUM_APP_FRAMES = 50
TIMELINE_FPS = 60.0
FPS_LOW = 7.0
FPS_HI = 12.0
DT_LOW = 1.0 / FPS_LOW
DT_HI = 1.0 / FPS_HI
# If True, render products are only enabled when needed (i.e. when data is accessed)
TOGGLE_RP_UPDATES = True


async def run_fps_example_async():
    omni.usd.get_context().new_stage()
    carb.settings.get_settings().set("/omni/replicator/captureOnPlay", False)
    rep.create.cube(semantics=[("class", "cube")])

    rp = rep.create.render_product("/OmniverseKit_Persp", (512, 512), name="rp")
    if TOGGLE_RP_UPDATES:
        rp.hydra_texture.set_updates_enabled(False)

    # Create writers
    out_dir_rgb = os.getcwd() + "/_out_writer_fps_rgb"
    writer_rgb = rep.WriterRegistry.get("BasicWriter")
    writer_rgb.initialize(output_dir=out_dir_rgb, rgb=True)
    writer_rgb.attach(rp, trigger=None)  # NOTE: trigger=None is needed for writer schedule triggering

    out_dir_depth = os.getcwd() + "/_out_writer_fps_depth"
    writer_depth = rep.WriterRegistry.get("BasicWriter")
    writer_depth.initialize(output_dir=out_dir_depth, distance_to_camera=True)
    writer_depth.attach(rp, trigger=None)  # NOTE: trigger=None is needed for writer schedule triggering
    print(f"Writer data will be written to: {out_dir_rgb} and {out_dir_depth}")

    # Set the timeline parameters to fit the desired scenario
    timeline = omni.timeline.get_timeline_interface()
    timeline.set_play_every_frame(True)
    timeline.set_auto_update(True)
    timeline.set_looping(False)
    timeline.set_current_time(0.0)
    timeline.set_end_time(1000)
    timeline.set_target_framerate(TIMELINE_FPS)
    timeline.play()

    # Test the timeline framerate by advancing it by the frame rate number of frames (i.e. 1 second of time)
    print(f"Initial timeline time: {timeline.get_current_time():.4f}")
    previous_time = timeline.get_current_time()
    for i in range(int(TIMELINE_FPS)):
        await omni.kit.app.get_app().next_update_async()
        print(f"[{i}][{timeline.get_current_time():.4f}] frame dt={(timeline.get_current_time() - previous_time):.4f}")
        previous_time = timeline.get_current_time()
    print(f"After {int(TIMELINE_FPS)} frames, the timeline time is: {timeline.get_current_time():.4f}")

    # Access annotator data at different framerates
    previous_time = timeline.get_current_time()
    elapsed_time_fps_low = 0.0
    elapsed_time_fps_hi = 0.0
    for i in range(NUM_APP_FRAMES):
        current_time = timeline.get_current_time()
        delta_time = current_time - previous_time
        elapsed_time_fps_low += delta_time
        elapsed_time_fps_hi += delta_time
        print(
            f"[{i}][{current_time:.4f}] elapsed_time_fps_low={elapsed_time_fps_low:.4f}/{DT_LOW:.4f},\t elapsed_time_fps_hi={elapsed_time_fps_hi:.4f}/{DT_HI:.4f},\t dt={delta_time:.4f};"
        )

        should_trigger_fps_low = elapsed_time_fps_low >= DT_LOW
        should_trigger_fps_hi = elapsed_time_fps_hi >= DT_HI
        if should_trigger_fps_low or should_trigger_fps_hi:
            # Enable render products for data access
            if TOGGLE_RP_UPDATES:
                rp.hydra_texture.set_updates_enabled(True)

            # Access data directly from annotators
            if should_trigger_fps_low:
                # Difference to the optimal trigger time (if the timeline framerate is not divisible by the sensor framerate)
                diff = elapsed_time_fps_low - DT_LOW
                print(f"\t[fps low] writer_rgb.schedule_write(), diff={diff:.4f}")
                writer_rgb.schedule_write()
                # Carry over the difference to the next trigger time
                elapsed_time_fps_low = diff
                # OR: elapsed_time_fps_low = 0.0 for a more simple reset

            if should_trigger_fps_hi:
                # Difference to the optimal trigger time (if the timeline framerate is not divisible by the sensor framerate)
                diff = elapsed_time_fps_hi - DT_HI
                print(f"\t[fps hi] writer_depth.schedule_write(), diff={diff:.4f}")
                writer_depth.schedule_write()
                # Carry over the difference to the next trigger time
                elapsed_time_fps_hi = diff
                # OR: elapsed_time_fps_hi = 0.0 for a more simple reset

            # Step needs to be called after scheduling the write
            await rep.orchestrator.step_async()

            # Disable render products for performance reasons until the next trigger time
            if TOGGLE_RP_UPDATES:
                rp.hydra_texture.set_updates_enabled(False)

            # Restart the timeline if it has been paused by the replicator step function
            if not timeline.is_playing():
                timeline.play()

        previous_time = current_time
        # Advance the app (timeline) by one frame
        await omni.kit.app.get_app().next_update_async()


asyncio.ensure_future(run_fps_example_async())


# NOTE:
# - To avoid FPS delta misses make sure the timeline framerate is divisible by the sensor framerates
  • Using annotators:
import asyncio

import carb.settings
import omni.kit.app
import omni.replicator.core as rep
import omni.timeline
import omni.usd

NUM_APP_FRAMES = 50
TIMELINE_FPS = 60.0
FPS_LOW = 7.0
FPS_HI = 12.0
DT_LOW = 1.0 / FPS_LOW
DT_HI = 1.0 / FPS_HI
# If True, render products are only enabled when needed (i.e. when data is accessed)
TOGGLE_RP_UPDATES = True


async def run_fps_example_async():
    omni.usd.get_context().new_stage()
    carb.settings.get_settings().set("/omni/replicator/captureOnPlay", False)
    rep.create.cube(semantics=[("class", "cube")])

    rp = rep.create.render_product("/OmniverseKit_Persp", (512, 512), name="rp")
    if TOGGLE_RP_UPDATES:
        rp.hydra_texture.set_updates_enabled(False)

    # Use annotators to access the data directly at different framerates
    annot_rgb = rep.AnnotatorRegistry.get_annotator("rgb")
    annot_rgb.attach(rp)
    annot_depth = rep.AnnotatorRegistry.get_annotator("distance_to_camera")
    annot_depth.attach(rp)

    # Set the timeline parameters to fit the desired scenario
    timeline = omni.timeline.get_timeline_interface()
    timeline.set_play_every_frame(True)
    timeline.set_auto_update(True)
    timeline.set_looping(False)
    timeline.set_current_time(0.0)
    timeline.set_end_time(1000)
    timeline.set_target_framerate(TIMELINE_FPS)
    timeline.play()

    # Test the timeline framerate by advancing it by the frame rate number of frames (i.e. 1 second of time)
    print(f"Initial timeline time: {timeline.get_current_time():.4f}")
    previous_time = timeline.get_current_time()
    for i in range(int(TIMELINE_FPS)):
        await omni.kit.app.get_app().next_update_async()
        print(f"[{i}][{timeline.get_current_time():.4f}] frame dt={(timeline.get_current_time() - previous_time):.4f}")
        previous_time = timeline.get_current_time()
    print(f"After {int(TIMELINE_FPS)} frames, the timeline time is: {timeline.get_current_time():.4f}")

    # Access annotator data at different framerates
    previous_time = timeline.get_current_time()
    elapsed_time_fps_low = 0.0
    elapsed_time_fps_hi = 0.0
    for i in range(NUM_APP_FRAMES):
        current_time = timeline.get_current_time()
        delta_time = current_time - previous_time
        elapsed_time_fps_low += delta_time
        elapsed_time_fps_hi += delta_time
        print(
            f"[{i}][{current_time:.4f}] elapsed_time_fps_low={elapsed_time_fps_low:.4f}/{DT_LOW:.4f},\t elapsed_time_fps_hi={elapsed_time_fps_hi:.4f}/{DT_HI:.4f},\t dt={delta_time:.4f};"
        )

        should_trigger_fps_low = elapsed_time_fps_low >= DT_LOW
        should_trigger_fps_hi = elapsed_time_fps_hi >= DT_HI
        if should_trigger_fps_low or should_trigger_fps_hi:
            # Enable render products for data access
            if TOGGLE_RP_UPDATES:
                rp.hydra_texture.set_updates_enabled(True)

            # Step replicator to feed annotators with new data
            await rep.orchestrator.step_async()

            # Access data directly from annotators
            if should_trigger_fps_low:
                # Difference to the optimal trigger time (if the timeline framerate is not divisible by the sensor framerate)
                diff = elapsed_time_fps_low - DT_LOW
                print(f"\t[fps low] data shape: {annot_rgb.get_data().shape}, diff={diff:.4f}")
                # Carry over the difference to the next trigger time
                elapsed_time_fps_low = diff
                # OR: elapsed_time_fps_low = 0.0 for a more simple reset

            if should_trigger_fps_hi:
                # Difference to the optimal trigger time (if the timeline framerate is not divisible by the sensor framerate)
                diff = elapsed_time_fps_hi - DT_HI
                print(f"\t[fps hi] data shape: {annot_depth.get_data().shape}, diff={diff:.4f}")
                # Carry over the difference to the next trigger time
                elapsed_time_fps_hi = diff
                # OR: elapsed_time_fps_hi = 0.0 for a more simple reset

            # Disable render products for performance reasons until the next trigger time
            if TOGGLE_RP_UPDATES:
                rp.hydra_texture.set_updates_enabled(False)

            # Restart the timeline if it has been paused by the replicator step function
            if not timeline.is_playing():
                timeline.play()

        previous_time = current_time
        # Advance the app (timeline) by one frame
        await omni.kit.app.get_app().next_update_async()


asyncio.ensure_future(run_fps_example_async())


# NOTE:
# - To avoid FPS delta misses make sure the timeline framerate is divisible by the sensor framerates