Stereoscopic 3D rendering?

Is it possible to render dual-eye stereoscopic video?

I have glanced at the documentation, and it looks like you can render either the left, right, or mono eye. I am looking to render out videos that can be watched on an HMD (an Oculus in my case). In the docs there is an asterisk next to stereo:
“Kit Based apps allows these values to be set in accordance with the USD schema, however these settings are currently ignored by RTX rendering.”
Which, if I am reading it correctly, means RTX ignores stereo?

Hello @terrygweems. I asked the rendering team for some help in answering your questions!

So I ran a quick test by rendering a left-eye and a right-eye frame using the "stereo role" camera settings and changing the projection type to fisheye spherical. I took those two frames into Photoshop and pasted them into a PSD so I could turn layers on and off to compare. The results seem basically identical, which is disappointing. There also seemed to be some lighting inconsistency between the two frames. So, bummer.

I would like to encourage the Omniverse team to add stereoscopic rendering. Omniverse XR has a huge GPU requirement of an RTX A6000 or an RTX 3090. I have ordered mine! But for those who don't have that kind of firepower, rendering out stereoscopic movies is a great way to experience your projects in VR without needing a high-powered card.

Oh, and I realize you would still render in fisheye, but to really get that sense of "presence" in VR you need that eye offset.

So it occurred to me that I could create an object, parent two cameras to it, and offset the cameras by 2.5 inches. Render out each eye, then composite both in After Effects to create the side-by-side or top-and-bottom image needed by a stereo viewer/Oculus. I had to create rigs like this back in the 90s, but haven't needed to for a long time. I can't remember if the cameras had a slight rotation on them. The idea is to recreate the view of the human eyes. Anyone know?
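The math for that rig is simple enough to sketch. This is my own illustration, not Omniverse API code: it just converts the 2.5-inch interpupillary distance into per-camera local X offsets, assuming the scene works in centimeters (Omniverse's default unit). The function name `eye_offsets_cm` is mine.

```python
# Sketch of the two-camera rig offsets (plain math, not Omniverse calls).
# Assumes the rig's parent sits at the "head" position and each eye
# camera is translated along the parent's local X axis.

INCH_TO_CM = 2.54

def eye_offsets_cm(ipd_inches=2.5):
    """Return (left, right) local X offsets in cm for the two eye cameras."""
    half_ipd_cm = (ipd_inches * INCH_TO_CM) / 2.0
    return -half_ipd_cm, half_ipd_cm

left_x, right_x = eye_offsets_cm()
print(left_x, right_x)  # roughly -3.175 and 3.175 cm
```

You would set those as the translate X values on the two child cameras and animate only the parent.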


Someone I worked with back in the 90s remembered a bit more. The rotation I am kind of remembering pertains to the convergence point, where the eyes focus on something. I can probably create the effect using some kind of point-at/look-at constraint. Or just ignore it and keep everything in the mid-range distance.
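If anyone wants the toe-in angle instead of a look-at constraint, it falls out of basic trigonometry: each camera rotates inward until its optical axis crosses the other's at the convergence distance. A minimal sketch (my own function name and default values; IPD in cm as above, convergence distance assumed at 2 m):

```python
import math

def toein_angle_deg(ipd_cm=6.35, convergence_cm=200.0):
    """Inward yaw (degrees) for each eye camera so the two optical
    axes intersect at the given convergence distance."""
    return math.degrees(math.atan((ipd_cm / 2.0) / convergence_cm))

print(toein_angle_deg())  # a bit under 1 degree at 2 m
```

The left camera gets +angle and the right camera -angle (or vice versa, depending on your axis convention).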

I am doing some quick tests; perhaps I will share them here. After some thought, I think my concern about a convergence point is not important. Since I will be rendering 360 images, a convergence point is probably not needed and possibly counterproductive.
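For completeness, the side-by-side / top-and-bottom packing step I mentioned earlier is trivial once both eyes are rendered. A toy sketch, with each frame represented as a list of pixel rows (in practice After Effects or ffmpeg would do this; the function names are mine):

```python
def side_by_side(left_rows, right_rows):
    """Pack two equal-height frames into one side-by-side frame,
    left eye on the left half."""
    if len(left_rows) != len(right_rows):
        raise ValueError("frames must have the same height")
    return [l + r for l, r in zip(left_rows, right_rows)]

def over_under(left_rows, right_rows):
    """Pack two frames top-and-bottom, left eye on top."""
    return left_rows + right_rows
```

Which layout you want depends on what the stereo player expects.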

It would be great to hear from the NVIDIA team.