Get deterministic results with Replicators

Hello,

I'm trying to create a camera using Replicator as follows:

import omni.replicator.core as rep
...

        self.camera_prim = rep.create.camera(
            rotation=(90, 90, 0),
            focal_length=self.focal_length,
            horizontal_aperture=self.horizontal_aperture,
            clipping_range=(self._near, self._far),
            parent=self.camera_prim_path)
        self.rp = rep.create.render_product(self.camera_prim, resolution=(self._render_width, self._render_height))
        self.depth_annotator = rep.AnnotatorRegistry.get_annotator("distance_to_image_plane")
        self.rgb_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
        self.instance_segmentation_annotator = rep.AnnotatorRegistry.get_annotator("instance_segmentation")
        self.rgb_annotator.attach([self.rp])
        self.depth_annotator.attach([self.rp])

With this camera, if I make an object fall and query its pose several times at the same point in the code, I don't get the same result each time. If I replace the camera with one made via SyntheticDataHelpers, the results are deterministic.
How can I get deterministic results?
Do I need to replace the calls to world.step() in my code with calls to rep.orchestrator.step()?
Should I call rep.orchestrator.run() at the start of my code, or only once all the annotators are attached?
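To measure how far the runs actually drift apart, one option is to record the object's pose at the same frame index in each run and compare the recordings. This is a minimal, hypothetical helper in plain Python (not part of the Replicator API); the pose values are made up for illustration:

```python
def max_pose_deviation(runs):
    """runs: list of per-run pose lists, each a list of (x, y, z) tuples
    sampled at the same frame indices. Returns the largest per-component
    difference across runs (0.0 means the runs are identical)."""
    worst = 0.0
    for frame in zip(*runs):          # poses from each run at the same frame
        for axis in zip(*frame):      # same component (x, y or z) across runs
            worst = max(worst, max(axis) - min(axis))
    return worst

run_a = [(0.0, 100.0, 0.0), (0.0, 98.4, 0.0)]
run_b = [(0.0, 100.0, 0.0), (0.0, 98.4, 0.0)]
print(max_pose_deviation([run_a, run_b]))  # 0.0 -> deterministic
```

A nonzero result pinpoints which frame first diverges, which helps separate physics nondeterminism from rendering or annotator timing.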

Hi, I tried to reproduce this issue with the Replicator camera by dropping a block onto a collision surface, but I am getting deterministic results. My code is below; can you check whether you also get deterministic results with it?


import omni.replicator.core as rep
import omni.timeline
import omni.usd
import omni.kit.app
from pxr import UsdPhysics
import asyncio

with rep.new_layer():
    cube1 = rep.create.cube(semantics=[('class', 'cube')], position=(0, 100, 0), rotation=(30, 30, 30))
    plane = rep.create.plane(scale=10, position=(0, -200, 0))

    camera_prim = rep.create.camera(
        rotation=(90, 90, 0),
        focal_length=2.4,
        horizontal_aperture=2.0955,
        clipping_range=(0.01, 10000.0))
    rp = rep.create.render_product(camera_prim, resolution=(640,480))
    depth_annotator = rep.AnnotatorRegistry.get_annotator("distance_to_image_plane")
    rgb_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
    instance_segmentation_annotator = rep.AnnotatorRegistry.get_annotator("instance_segmentation")
    rgb_annotator.attach([rp])
    depth_annotator.attach([rp])


stage = omni.usd.get_context().get_stage()

prim_path = "/Replicator/Cube_Xform/Cube"
prim = stage.GetPrimAtPath(prim_path)
UsdPhysics.CollisionAPI.Apply(prim)
UsdPhysics.RigidBodyAPI.Apply(prim)

prim_path2 = "/Replicator/Plane_Xform/Plane"
prim2 = stage.GetPrimAtPath(prim_path2)
UsdPhysics.CollisionAPI.Apply(prim2)

omni.timeline.get_timeline_interface().play()


async def main():
    for i in range(20):
        print(f"stepping, {prim.GetAttribute('xformOp:translate').Get()}")
        await omni.kit.app.get_app().next_update_async()
        await asyncio.sleep(0.1)

    
    omni.timeline.get_timeline_interface().stop()

asyncio.ensure_future(main())
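To compare two runs of the script above, the printed lines can be parsed back into numbers and diffed. This is a hypothetical log-parsing sketch, assuming the translation value stringifies as a plain `(x, y, z)` tuple in the printed output:

```python
import re

# Matches lines like "stepping, (0, 100, 0)" printed by the loop above.
# Assumes the Gf.Vec3 value stringifies as a parenthesized tuple.
LINE_RE = re.compile(r"stepping, \(([^)]*)\)")

def parse_translations(log_text):
    """Extract each printed translation as a tuple of floats."""
    return [
        tuple(float(v) for v in m.group(1).split(","))
        for m in LINE_RE.finditer(log_text)
    ]

log = "stepping, (0, 100, 0)\nstepping, (0, 98.4, 0)\n"
print(parse_translations(log))  # [(0.0, 100.0, 0.0), (0.0, 98.4, 0.0)]
```

Running the script twice, capturing stdout each time, and comparing the parsed lists element by element makes any nondeterminism show up as a concrete numeric difference rather than a visual impression.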

Hi, I ran this code on the Isaac-Prod public release version and was able to reproduce the stochastic results; they seem to be stochastic with or without the camera. I'm looking into it further.