Difference between two similar distance_to_camera images

Hi,
I am using Isaac Sim to generate synthetic distance_to_camera (depth) images along with RGB images. My simulation contains humans and some falling objects. I compute the difference between the first depth image and each subsequent depth image to obtain a change image.

However, I observe changes in the depth image at positions where no objects have moved. Is this intended, or is there a way to resolve it?
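Roughly, the change image is computed like this (a minimal sketch of the workflow; the file names and the threshold are placeholders, not the exact script):

import numpy as np

# Load the reference (first) depth frame and a later frame; file names are hypothetical.
ref_depth = np.load("distance_to_camera_0000.npy")
cur_depth = np.load("distance_to_camera_0001.npy")

# Per-pixel absolute change in distance_to_camera values (scene units).
diff = np.abs(cur_depth - ref_depth)

# Flag pixels whose depth changed by more than a small tolerance, so that
# tiny numerical differences are not counted as movement.
changed_mask = diff > 0.01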

ref_rgb (reference RGB image)
current_img_rgb (current image)
diff_img (difference image)
The differences show up as clusters of dots in diff_img.

  • For movement, the AGV is lifted to some height, as seen in the current image

Hi there,

Can you go over in a bit more detail how you are accessing the annotator data? Could it be that the timeline is still running between the captures, hence the changes?

Hi, I am using the Isaac Sim Synthetic Data Recorder to specify parameters like the camera, image size, etc., and I use the start button with a fixed number of frames to record the data. During this, no other simulation is running.

Hi @ahaidu, is there any other information you would like to know about the process, or any specific details of the image processing I am using?

Have you checked whether the issue happens in different scenarios as well? This way we can rule out whether the issue is somehow coupled with the Synthetic Data Recorder advancing the timeline.
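One quick way to check this is to print the timeline state from the Script Editor while the recorder is capturing (a small sketch using the standard omni.timeline interface):

import omni.timeline

timeline = omni.timeline.get_timeline_interface()
# If the reported time keeps increasing between captures, the scene is being
# simulated, and moving objects will show up in the depth difference.
print(f"playing: {timeline.is_playing()}, time: {timeline.get_current_time()}")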

Another approach would be to try accessing the data either directly from an annotator or through a basic writer. Here is a Script Editor example for this:

import asyncio
import os

import numpy as np
import omni.replicator.core as rep
from PIL import Image

NUM_FRAMES = 5

def save_rgb(rgb_data, file_name):
    rgb_img = Image.fromarray(rgb_data, "RGBA")
    rgb_img.save(file_name + ".png")

# Simple test scene: a cube viewed by a camera through a 512x512 render product
cube = rep.create.cube()
camera = rep.create.camera(position=(0, 0, 5), look_at=cube)
render_product = rep.create.render_product(camera, (512, 512))

output_directory = os.getcwd() + "/_out_rgb"
os.makedirs(output_directory, exist_ok=True)
print(f"output_directory: {output_directory}")

# Access the data directly through an annotator...
rgb_annot = rep.AnnotatorRegistry.get_annotator("rgb")
rgb_annot.attach(render_product)

# ...and in parallel through the BasicWriter, which writes to disk on every step
basic_writer = rep.WriterRegistry.get("BasicWriter")
basic_writer.initialize(output_dir=output_directory, rgb=True)
basic_writer.attach(render_product)

async def write_data_async():
    for i in range(NUM_FRAMES):
        print(f"Frame {i}")
        await rep.orchestrator.step_async()
        save_rgb(rgb_annot.get_data(), f"{output_directory}/rgb_annot_{i}")

asyncio.ensure_future(write_data_async())
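Since the question is about distance_to_camera data, the same pattern works with the depth annotator as well (a sketch reusing the render_product, output_directory, and imports from the example above; the raw float data is saved as .npy instead of being converted to an image):

depth_annot = rep.AnnotatorRegistry.get_annotator("distance_to_camera")
depth_annot.attach(render_product)

async def write_depth_async():
    for i in range(NUM_FRAMES):
        await rep.orchestrator.step_async()
        # distance_to_camera returns a float32 array of per-pixel camera-to-surface
        # distances, so save it as raw data rather than an 8-bit image.
        np.save(f"{output_directory}/distance_to_camera_{i}.npy", depth_annot.get_data())

asyncio.ensure_future(write_depth_async())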

Thanks for the reply @ahaidu. The problem is not with the static objects' data but with the moving objects (apart from humans), like the trolley. The static images give perfect results.

This probably happens because the timeline advances by one frame on every replicator step.

If you run this code, do the objects move in the scene?

import asyncio
import omni.replicator.core as rep

async def step_frames_async():
    for i in range(100):
        print(f"Frame {i}")
        await rep.orchestrator.step_async()

asyncio.ensure_future(step_frames_async())

If yes, make sure you disable the capture-on-play flag, which advances the timeline on every step. The pause_timeline flag also needs to be set to False at the moment.

import asyncio
import omni.timeline
import omni.replicator.core as rep

rep.orchestrator.set_capture_on_play(False)
timeline = omni.timeline.get_timeline_interface()

async def step_frames_async():
    for i in range(100):
        print(f"Frame {i}, timeline time={timeline.get_current_time()}")
        await rep.orchestrator.step_async(pause_timeline=False)
        if timeline.is_playing():
            timeline.pause()

asyncio.ensure_future(step_frames_async())
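To tie this back to the original depth use case, the same stepping approach can be combined with a distance_to_camera annotator to verify that a static scene produces no change image (a sketch building on the snippets above; the tolerance and scene setup are examples):

import asyncio
import numpy as np
import omni.timeline
import omni.replicator.core as rep

rep.orchestrator.set_capture_on_play(False)
timeline = omni.timeline.get_timeline_interface()

# Minimal scene and render product, as in the earlier example.
cube = rep.create.cube()
camera = rep.create.camera(position=(0, 0, 5), look_at=cube)
render_product = rep.create.render_product(camera, (512, 512))

depth_annot = rep.AnnotatorRegistry.get_annotator("distance_to_camera")
depth_annot.attach(render_product)

async def check_depth_changes_async(num_frames=5, tolerance=0.01):
    ref_depth = None
    for i in range(num_frames):
        await rep.orchestrator.step_async(pause_timeline=False)
        if timeline.is_playing():
            timeline.pause()
        depth = depth_annot.get_data().astype(np.float32)
        if ref_depth is None:
            ref_depth = depth
            continue
        # Count pixels whose depth changed more than the tolerance; with the
        # timeline held, a static scene should report (close to) zero.
        changed = np.count_nonzero(np.abs(depth - ref_depth) > tolerance)
        print(f"Frame {i}: {changed} changed pixels")

asyncio.ensure_future(check_depth_changes_async())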