We read in a 3D world (in any of a variety of formats, produced in something like Maya, 3ds Max, Cinema 4D, etc.) and have our own OpenGL engine which displays it in stereo for the Rift.
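To give a flavor of the stereo pass, here's a toy C++/GLM sketch of a side-by-side render, one eye per half of the framebuffer. drawScene(), the layout, and the 64 mm eye separation are illustrative assumptions, not our engine code:

    // Toy sketch of a side-by-side stereo pass: render the scene once per eye
    // with a small horizontal eye offset. Placeholder names throughout.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <GL/gl.h>

    void drawScene(const glm::mat4& mvp);  // engine draw call (placeholder)

    void renderStereo(const glm::mat4& view, const glm::mat4& proj,
                      int fbWidth, int fbHeight, float eyeSeparation = 0.064f)
    {
        for (int eye = 0; eye < 2; ++eye) {
            // Shift the view left/right by half the interocular distance.
            float sign = (eye == 0) ? +0.5f : -0.5f;
            glm::mat4 eyeView =
                glm::translate(glm::mat4(1.0f),
                               glm::vec3(sign * eyeSeparation, 0.0f, 0.0f)) * view;
            // Left half of the framebuffer for the left eye, right for the right.
            glViewport(eye * fbWidth / 2, 0, fbWidth / 2, fbHeight);
            drawScene(proj * eyeView);
        }
    }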
Then we have GPS, accelerometers, and gyros, wired in over UART and I2C, along with algorithms to detect your gait… basically to get your translation vector. That, along with the head-rotation info from the Rift, builds the compound transformation matrix that keeps you moving through the world in sync with your physical movements.
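Roughly, the composition looks like this (a minimal sketch assuming GLM; buildViewMatrix and the variable names are ours for illustration, not from the actual codebase):

    // Composing the compound transform: the Rift supplies head orientation,
    // the gait/IMU/GPS pipeline supplies the body's position in the world.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/quaternion.hpp>

    glm::mat4 buildViewMatrix(const glm::quat& headOrientation,  // from the Rift tracker
                              const glm::vec3& bodyPosition)     // integrated translation vector
    {
        // The camera transform is "translate to the body, then rotate the head";
        // the view matrix is its inverse: undo the rotation, then the translation.
        glm::mat4 invRotation    = glm::mat4_cast(glm::inverse(headOrientation));
        glm::mat4 invTranslation = glm::translate(glm::mat4(1.0f), -bodyPosition);
        return invRotation * invTranslation;
    }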
We have a hand control (a repurposed Wii Nunchuk) which the user uses to (a rough sketch of the mapping follows the list):

- freeze the telemetry (for aligning the real and virtual worlds)
- select objects and do rudimentary I/O with the system
- alpha-blend a forward-looking real-world camera with the virtual world, to avoid collisions with real stuff
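Something like this, input-wise (the button assignments here are illustrative guesses, not our real bindings):

    // Illustrative mapping of the Nunchuk to the three functions above.
    #include <glm/glm.hpp>

    struct NunchukState { bool buttonC = false, buttonZ = false; float joyX = 0, joyY = 0; };

    struct AppState {
        bool telemetryFrozen = false;  // hold pose while aligning real/virtual worlds
        float cameraBlend = 0.0f;      // 0 = all virtual, 1 = all camera passthrough
        glm::vec2 cursor{0.0f};        // rudimentary selection cursor
    };

    void handleInput(const NunchukState& now, const NunchukState& prev, AppState& app)
    {
        // Toggle telemetry freeze on Z press; edge-triggered, since the controller
        // is polled every frame and a level-triggered toggle would flicker.
        if (now.buttonZ && !prev.buttonZ)
            app.telemetryFrozen = !app.telemetryFrozen;

        // Hold C to fade in the forward-looking camera; the shader then just does
        // mix(virtualColor, cameraColor, cameraBlend).
        app.cameraBlend = now.buttonC ? 0.5f : 0.0f;

        // Joystick drives the selection cursor.
        app.cursor += glm::vec2(now.joyX, now.joyY) * 0.01f;
    }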
The basic idea is to plug in a 3D world and start walking around in it.
We got a chance to show it off on the set of San Andreas (the upcoming action flick), where it was well-liked.
Now it’s back on the set of a very well-known Disney movie, the name of which I can’t say right now.
We’re also planning to integrate an RTK GPS with 2 cm-level accuracy for better position tracking. Right now it uses the accelerometers/gyros, then throws in the (not-so-great) GPS data when it is available.
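The fusion is conceptually a dead-reckon-then-correct loop; here's a toy complementary filter that captures the idea (a sketch, not our actual estimator):

    // Dead-reckon position from the IMU, then nudge toward GPS whenever a
    // (possibly low-quality) fix arrives.
    #include <glm/glm.hpp>
    #include <optional>

    struct PositionFilter {
        glm::vec3 position{0.0f};
        glm::vec3 velocity{0.0f};

        void update(const glm::vec3& accelWorld, float dt,
                    const std::optional<glm::vec3>& gpsFix, float gpsTrust = 0.05f)
        {
            // Between fixes: integrate acceleration twice (drifts over time).
            velocity += accelWorld * dt;
            position += velocity * dt;

            // When a fix arrives, blend toward it; with a 2 cm RTK fix,
            // gpsTrust could be turned way up.
            if (gpsFix)
                position = glm::mix(position, *gpsFix, gpsTrust);
        }
    };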
We got a little funding which will take us through a demo in May or June, and if we get that right,
we’ll be looking for real funding and employees and so on.
That’s the basic story… let me know if you’re bored and looking for something to do :)
As far as making the code public, we’d be happy to do that at some point when we get some breathing room. It helps put the Rift back where it belongs, as a display device and deliverer of telemetry, and not the center of the universe, which seems to be the direction they’re heading.