First two frames in SDG in 4.5 are trash

I’m using the Synthetic Data Recorder in Isaac Sim 4.5, and the first two frames are quite reliably trash: sometimes the animated people (omni.anim.people) are not loaded yet, sometimes the output looks like the attached image.

I don’t use subframes (and I don’t want to run subframes for the whole sequence just to improve the first two frames).

Would it be possible to warm up the render pipeline internally before publishing those images?

Similar problems here.


Small update: I’m working with a BehaviorScript and access my cameras there. The get_current_frame() method gives me a 'rendering_frame' value that, during normal operation, looks like the number of frames rendered since the file was opened (not(!) the number of frames since the start of the simulation).

For the first two frames, however, I get a rendering_frame value of 0, so it looks like the system knows that the first two iterations (on_update calls) are something special.

I think it would also be great if the on_update method were not triggered for these frames.
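In the meantime I could guard against them in the script itself. A minimal sketch of that workaround, assuming the rendering_frame == 0 observation above holds (the camera path and resolution are placeholders):

```python
import numpy as np
from omni.kit.scripting import BehaviorScript
from omni.isaac.sensor import Camera

CAMERA_PRIM_PATH = "/World/Camera"  # placeholder, adjust to your scene

class GuardedCapture(BehaviorScript):
    def on_init(self):
        # Wrap the existing camera prim in an omni.isaac.sensor Camera.
        self._camera = Camera(prim_path=CAMERA_PRIM_PATH, resolution=(1280, 720))
        self._camera.initialize()

    def on_update(self, current_time: float, delta_time: float):
        frame = self._camera.get_current_frame()
        # Skip the warm-up iterations that report rendering_frame == 0.
        if frame.get("rendering_frame", 0) == 0:
            return
        rgba = frame["rgba"]
        if isinstance(rgba, np.ndarray) and rgba.size > 0:
            pass  # store/process the valid image here
```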

Hi there,

Would it be possible to provide more information on your workflow? E.g., are you running standalone, or from the Script Editor/UI? Are you accessing data through sensors, annotators, or writers? Are you advancing the simulation through the Isaac Sim world or through the timeline? Depending on how you run the pipeline, you could wait for a few app updates whenever loading new assets/scenarios to make sure everything is loaded.
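In a standalone script that could look roughly like this (the warm-up count is an arbitrary example, tune it for your scene):

```python
from isaacsim import SimulationApp

simulation_app = SimulationApp({"headless": True})

# ... load the stage, people, cameras, and writers here ...

# Let asset loading and the render pipeline settle before capturing data.
WARMUP_UPDATES = 10  # arbitrary example value
for _ in range(WARMUP_UPDATES):
    simulation_app.update()
```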

Regarding subframes: yes, the recorder will run with the given subframes for the whole recording. The recorder is intended for quick UI-based data collection/validation of various Replicator writers and does not have any custom logic built in.

Cheers

Hey!
Thanks for looking into the issue!

I have the issue with these two workflows:

Isaac Sim 4.5 on Ubuntu 22.04

  • Set up a scene with some people (Agent SDG) and a camera
  • Set up the SDR with a render product for the new camera
  • In the SDR parameters, select the rgb flag for the writer
  • 0 RTSubframes (the other frames look fine)

And

  • Create a BehaviorScript and attach it to /World
  • In on_init, select the camera by prim_path and create an omni.isaac.sensor Camera
  • In on_update, use camera.get_current_frame()['rgba'] to get a numpy image and store it (see the sketch below)
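For the storing step, a minimal sketch, assuming rgba is the HxWx4 uint8 numpy array returned by the camera and that Pillow is available in the script environment (the output path is a placeholder):

```python
from PIL import Image

def save_frame(rgba, path="/tmp/frame.png"):  # placeholder path
    # rgba: HxWx4 uint8 array from camera.get_current_frame()["rgba"]
    Image.fromarray(rgba, mode="RGBA").save(path)
```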

Can you provide more details regarding these steps? Are you using the synthetic data recorder? If so, what is the camera sensor used for? Do you need both workflows?

I’m sorry, I used the Synthetic Data Recorder (fixed the typos above).

I’m using the BehaviorScript-based data collection to capture a single orthographic image of the whole scene, for visualizing ground-truth positions later in the pipeline, but I guess I could also use a custom writer instead. (Is there a way to use multiple writers with the SD Recorder?)
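For context, the orthographic view is just a regular USD camera with its projection attribute switched, roughly like this (the prim path and aperture values are placeholders):

```python
import omni.usd
from pxr import UsdGeom

stage = omni.usd.get_context().get_stage()
cam = UsdGeom.Camera(stage.GetPrimAtPath("/World/TopDownCamera"))  # placeholder
cam.CreateProjectionAttr().Set("orthographic")
# For orthographic projections, USD apertures are in tenths of scene units.
cam.CreateHorizontalApertureAttr().Set(500.0)
cam.CreateVerticalApertureAttr().Set(500.0)
```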

The recorder can only work with one writer (you can, however, use custom writers if needed). For more flexible workflows it is recommended to use a scripting approach; that way you can, for example, use the rt_subframes argument of the orchestrator.step() function, which renders the given frame with more subframes and should fix the artifacts you are seeing.
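A minimal sketch of that scripted approach, assuming a standalone script where SimulationApp has already been created (output directory, resolution, and frame counts are placeholders):

```python
import omni.replicator.core as rep

camera = rep.create.camera(position=(0, 0, 10))
render_product = rep.create.render_product(camera, (1280, 720))

writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(output_dir="/tmp/sdg_out", rgb=True)  # placeholder directory
writer.attach([render_product])

for i in range(100):
    # Render extra RTX subframes only for the frames that need them.
    if i < 2:
        rep.orchestrator.step(rt_subframes=32)
    else:
        rep.orchestrator.step()

# Make sure all queued frames are written before exiting.
rep.orchestrator.wait_until_complete()
```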

Here are some tutorials on getting started with the scripts:

Let me know if any other snippets might be useful so we can try adding them to the list.

Cheers