Synthetic data for novel view synthesis

Hi All,

I’d like to generate synthetic data for novel view synthesis. In practice this would require several hundred to several thousand overlapping images covering a specific scene. I would need a way to automatically generate proper camera poses (and camera configurations) and render images from those viewpoints. No additional ground truth would be needed.

Could you recommend a method to do so in an automated fashion?

Best Regards,
Laszlo

Hi @user4084, can you give me a bit more information about the scene you’re trying to generate synthetic data for? If you’re working with single isolated objects, you could use the camera orbit function:

https://docs.omniverse.nvidia.com/py/replicator/1.11.16/source/extensions/omni.replicator.core/docs/API.html#omni.replicator.core.modify.pose_orbit
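Independently of Replicator, the orbit idea is easy to prototype offline. Below is a minimal NumPy sketch (the function names `look_at` and `orbit_poses` are my own, not part of any Omniverse API) that spaces cameras evenly on a circle around an object at a fixed elevation and builds a look-at pose for each, which is the same pattern `pose_orbit` automates inside Replicator:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a 3x4 camera-to-world pose that looks from `eye` toward `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Columns: right, up, -forward (OpenGL-style axes), plus camera position.
    return np.column_stack([right, true_up, -forward, eye])

def orbit_poses(center, radius, n_views, elevation_deg=30.0):
    """Camera poses evenly spaced on a circle around `center` at one elevation."""
    elev = np.radians(elevation_deg)
    poses = []
    for az in np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False):
        eye = center + radius * np.array([
            np.cos(elev) * np.cos(az),
            np.cos(elev) * np.sin(az),
            np.sin(elev),
        ])
        poses.append(look_at(eye, center))
    return poses

poses = orbit_poses(center=np.zeros(3), radius=2.0, n_views=100)
```

For better coverage you would typically run this at a few elevations (and possibly radii) and concatenate the results.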

For complex scenes you’ll probably need to sample the open space and move the camera to random (or ordered) positions and orientations. The scatter-3D functionality may help here. I’ll ask one of my colleagues what he thinks and get back to you.
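To make the sampling idea concrete, here is a hedged NumPy sketch (again, the names are illustrative, not a Replicator API): rejection-sample camera positions inside an axis-aligned bounding box, keep only positions that pass a user-supplied free-space test (e.g. a collision query against the scene), and aim each camera at a random point in the box so that neighbouring views overlap:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_free_space_poses(bounds_min, bounds_max, n_views, is_free=lambda p: True):
    """Sample (eye, target) pairs inside a box, rejecting eyes inside geometry.

    `is_free` is a hypothetical callback standing in for whatever collision or
    occupancy query your scene provides. Each (eye, target) pair can then be
    turned into a camera matrix in your renderer of choice.
    """
    bounds_min = np.asarray(bounds_min, dtype=float)
    bounds_max = np.asarray(bounds_max, dtype=float)
    poses = []
    while len(poses) < n_views:
        eye = rng.uniform(bounds_min, bounds_max)
        if not is_free(eye):
            continue  # position is inside scene geometry; resample
        target = rng.uniform(bounds_min, bounds_max)
        if np.allclose(eye, target):
            continue  # degenerate look direction; resample
        poses.append((eye, target))
    return poses

poses = sample_free_space_poses([-5, -5, 0.5], [5, 5, 3.0], n_views=500)
```

Biasing the targets toward regions of interest (rather than uniform over the whole box) is a simple way to increase view overlap on the parts of the scene you care about.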

In the interim, could you add a bit more information on the type of scene you’re intending to use?