Initializing a writer significantly reduces FPS of simulation

I have a simulation with a robot that moves forward. If I enable my extension (which initializes a custom writer in its startup function) and run my simulation, it is very noticeable that the simulation runs much slower (around 10 FPS instead of 60). What could cause this? The write function of my custom writer is empty. Thank you.

Hi there,

Before initializing the writer you create a render product; depending on its resolution and the scene complexity, this alone can consume a lot of GPU computation and cause the FPS drop.

Additional cost comes from attaching the writer to the render product: this activates the selected annotators, each of which adds to the per-frame computation.

The write function itself should not cause any FPS drop unless the hard drive cannot keep up with the amount of data being written, in which case you can use this to optimise:
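As a quick sanity check (my own sketch, independent of Replicator), you can time raw disk writes of frame-sized buffers to see whether the drive itself is the limiting factor. The function name and the frame size are illustrative, chosen to roughly match a 2000x2000 RGBA frame:

```python
import os
import tempfile
import time


def measure_write_throughput(num_frames=5, frame_bytes=2000 * 2000 * 4):
    """Write `num_frames` buffers of `frame_bytes` each and return MB/s."""
    data = os.urandom(frame_bytes)  # roughly one 2000x2000 RGBA frame
    start = time.perf_counter()
    with tempfile.TemporaryDirectory() as tmp_dir:
        for i in range(num_frames):
            with open(os.path.join(tmp_dir, f"frame_{i}.bin"), "wb") as f:
                f.write(data)
    elapsed = time.perf_counter() - start
    return (num_frames * frame_bytes) / (1024 * 1024) / elapsed


print(f"Sustained write throughput: {measure_write_throughput():.1f} MB/s")
```

If the measured throughput is well above what your writer produces per second, the disk is unlikely to be the cause of the slowdown.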

If you are writing the data sparsely in your scenario, it can also make sense to activate and deactivate the render product, similarly to this example (using the use_temp_rp flag).
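As a rough sketch of that pattern (only runnable inside an Omniverse app's Script Editor; it reuses the `hydra_texture.set_updates_enabled` call from the script below together with `rep.orchestrator.step_async`, but treat the exact flow as my assumption, not the linked example):

```python
import omni.replicator.core as rep

# Sketch: keep the render product idle most of the time and only
# enable updates around the frames you actually want to capture.
rp = rep.create.render_product("/OmniverseKit_Persp", (1024, 1024))
writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(output_dir="_out_sparse", rgb=True)
writer.attach(rp)

# Disable rendering of the render product while the simulation runs
rp.hydra_texture.set_updates_enabled(False)

async def capture_one_frame():
    # Re-enable updates only for the capture, then disable them again
    rp.hydra_texture.set_updates_enabled(True)
    await rep.orchestrator.step_async()
    rp.hydra_texture.set_updates_enabled(False)
```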

Here is also a Python script that you can run in the Script Editor, giving you an idea of how creating the render product / attaching the writer influences the GPU load:

import asyncio
import os
import subprocess

import omni.kit.app
import omni.kit.viewport.utility
import omni.replicator.core as rep

SLEEP_TIME = 0.5
NUM_READS = 8

def get_gpu_used_memory_and_load():
    """Return used GPU memory in MB and GPU load in %"""
    mem_used = 0
    gpu_load = 0

    command = "nvidia-smi"
    arg_query = "--query-gpu=memory.used,utilization.gpu"
    arg_format = "--format=csv,noheader,nounits"

    try:
        p = subprocess.Popen([command, arg_query, arg_format], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        output, err = p.communicate()
        output_lines = [m for m in output.decode("ascii").split("\n") if m]
        if output_lines:
            mem_used = sum(int(line.split(",")[0]) for line in output_lines)
            gpu_load = sum(int(line.split(",")[1]) for line in output_lines) / len(output_lines)
    except (OSError, ValueError) as e:
        print("Error trying to run nvidia-smi")
        print(e)

    return mem_used, gpu_load

cube = rep.create.cube(semantics=[("class", "cube")], count=10)
viewport_api = omni.kit.viewport.utility.get_active_viewport()

async def run_example_async():
    # Warmup, make sure scene is fully loaded
    for _ in range(20):
        await omni.kit.app.get_app().next_update_async()

    print("Baseline:")
    for i in range(NUM_READS):
        await omni.kit.app.get_app().next_update_async()
        if SLEEP_TIME > 0:
            await asyncio.sleep(SLEEP_TIME)
        mem_used, gpu_load = get_gpu_used_memory_and_load()
        print(f"\t[{i}] GPU memory used: {mem_used} MB, GPU load: {gpu_load}%, FPS: {viewport_api.fps:0.2f}")
    

    print("Render product created:")
    rp = rep.create.render_product("/OmniverseKit_Persp", (2000, 2000))
    for i in range(NUM_READS):
        await omni.kit.app.get_app().next_update_async()
        if SLEEP_TIME > 0:
            await asyncio.sleep(SLEEP_TIME)
        mem_used, gpu_load = get_gpu_used_memory_and_load()
        print(f"\t[{i}] GPU memory used: {mem_used} MB, GPU load: {gpu_load}%, FPS: {viewport_api.fps:0.2f}")


    print(f"Writer created and initialized:")
    writer = rep.WriterRegistry.get("BasicWriter")
    output_directory = os.getcwd() + "/_out_load_test"
    writer.initialize(output_dir=output_directory, rgb=True, semantic_segmentation=True, bounding_box_3d=True)
    for i in range(NUM_READS):
        await omni.kit.app.get_app().next_update_async()
        if SLEEP_TIME > 0:
            await asyncio.sleep(SLEEP_TIME)
        mem_used, gpu_load = get_gpu_used_memory_and_load()
        print(f"\t[{i}] GPU memory used: {mem_used} MB, GPU load: {gpu_load}%, FPS: {viewport_api.fps:0.2f}")


    print(f"Writer attached to render product:")
    writer.attach(rp)
    for i in range(NUM_READS):
        await omni.kit.app.get_app().next_update_async()
        if SLEEP_TIME > 0:
            await asyncio.sleep(SLEEP_TIME)
        mem_used, gpu_load = get_gpu_used_memory_and_load()
        print(f"\t[{i}] GPU memory used: {mem_used} MB, GPU load: {gpu_load}%, FPS: {viewport_api.fps:0.2f}")


    print(f"Writer detached from render product:")
    writer.detach()
    for i in range(NUM_READS):
        await omni.kit.app.get_app().next_update_async()
        if SLEEP_TIME > 0:
            await asyncio.sleep(SLEEP_TIME)
        mem_used, gpu_load = get_gpu_used_memory_and_load()
        print(f"\t[{i}] GPU memory used: {mem_used} MB, GPU load: {gpu_load}%, FPS: {viewport_api.fps:0.2f}")


    print(f"Render product updates disabled:")
    rp.hydra_texture.set_updates_enabled(False)
    for i in range(NUM_READS):
        await omni.kit.app.get_app().next_update_async()
        if SLEEP_TIME > 0:
            await asyncio.sleep(SLEEP_TIME)
        mem_used, gpu_load = get_gpu_used_memory_and_load()
        print(f"\t[{i}] GPU memory used: {mem_used} MB, GPU load: {gpu_load}%, FPS: {viewport_api.fps:0.2f}")


    print(f"Render product destroyed:")
    rp.destroy()
    for i in range(NUM_READS):
        await omni.kit.app.get_app().next_update_async()
        if SLEEP_TIME > 0:
            await asyncio.sleep(SLEEP_TIME)
        mem_used, gpu_load = get_gpu_used_memory_and_load()
        print(f"\t[{i}] GPU memory used: {mem_used} MB, GPU load: {gpu_load}%, FPS: {viewport_api.fps:0.2f}")


asyncio.ensure_future(run_example_async())