Save camera position as synthetic data

Is it possible to save the world position of a robot’s camera at each frame when using the Synthetic Data Recorder?

I found a possible solution using the core API, as shown in the “Visualize Synthetic Data” tutorial:
https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/tutorial_replicator_visualize_groundtruth.html
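For reference, this is roughly the kind of per-frame pose logging I have in mind on top of that tutorial’s capture loop. It’s an untested sketch: the camera path `/World/Robot/Camera`, the output file name, and the `log_camera_pose` helper are all placeholders for my setup.

```python
# Untested sketch: query the camera prim's world transform each frame and
# append it to a CSV next to the recorded synthetic data.
# "/World/Robot/Camera" and "camera_poses.csv" are placeholders.
import csv

import omni.usd
from pxr import Usd, UsdGeom

stage = omni.usd.get_context().get_stage()
camera_prim = stage.GetPrimAtPath("/World/Robot/Camera")
xform_cache = UsdGeom.XformCache(Usd.TimeCode.Default())


def log_camera_pose(frame_index, csv_writer):
    """Write the camera's current world translation and rotation for one frame."""
    xform_cache.Clear()  # drop cached transforms so we read the current frame
    world_tf = xform_cache.GetLocalToWorldTransform(camera_prim)  # Gf.Matrix4d
    translation = world_tf.ExtractTranslation()
    rotation = world_tf.ExtractRotationQuat()  # Gf.Quatd (w + xyz)
    csv_writer.writerow(
        [frame_index]
        + [translation[i] for i in range(3)]
        + [rotation.GetReal()]
        + [rotation.GetImaginary()[i] for i in range(3)]
    )


# Usage: open the file once, then call log_camera_pose(i, pose_writer) once
# per rendered frame, right after grabbing the groundtruth for that frame.
pose_file = open("camera_poses.csv", "w", newline="")
pose_writer = csv.writer(pose_file)
pose_writer.writerow(["frame", "tx", "ty", "tz", "qw", "qx", "qy", "qz"])
```

The idea is to call `log_camera_pose` once per captured frame so each pose row lines up with the corresponding image index.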

The only problem is that I want to record a stereoscopic camera setup, but according to this tutorial, “Presently, the core API only supports a single camera, but additional sensors and camera features will be available in the future.”

So, if I run a separate simulation for each camera, will the rendered frames from the two runs be captured at exactly the same simulation times?
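
Alternatively, would something like the following keep both cameras in sync within a single run? This is just a sketch of what I’m imagining, assuming `omni.replicator.core` is available in my version; the camera prim paths, resolution, and output directory are placeholders.

```python
# Untested sketch: attach both stereo cameras to one writer so they are
# captured on the same step. Prim paths and output dir are placeholders.
import omni.replicator.core as rep

left_rp = rep.create.render_product("/World/Robot/StereoRig/CameraLeft", (1280, 720))
right_rp = rep.create.render_product("/World/Robot/StereoRig/CameraRight", (1280, 720))

writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(output_dir="_out_stereo", rgb=True)
writer.attach([left_rp, right_rp])

# Trigger a fixed number of captures; both render products should be
# written for the same frame on every step.
for _ in range(100):
    rep.orchestrator.step()
```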

Thanks!