CloudXR AR virtual objects behind real objects


We have been testing CloudXR for almost half a year and we like it. We would like to modify the client application to extend it with some AR features.

Is it possible to control which objects are visible and which are not? In the sample application, we would like the boxes to disappear when we put our hands in front of them. How and/or where are the camera image and the rendered frame merged?

There is currently no support for client-side AR depth or for streaming the server's depth buffer, both of which would be needed to attempt a depth merge on the client. We simply alpha blend the AR stream over the camera feed.
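To illustrate the difference, here is a minimal per-pixel sketch in Python. This is not CloudXR code (the real client composites whole frames on the GPU); `alpha_blend` mirrors the plain "over" compositing the answer describes, and `depth_merge` is a hypothetical function showing why a server depth buffer and a client depth estimate would both be needed for occlusion.

```python
def alpha_blend(ar_rgba, cam_rgb):
    """Composite one AR RGBA pixel over one camera RGB pixel (straight alpha).

    This is plain 'over' compositing: the AR pixel always wins where its
    alpha is high, regardless of how far away the real surface is.
    """
    r, g, b, a = ar_rgba
    t = a / 255.0
    return tuple(round(src * t + dst * (1.0 - t))
                 for src, dst in zip((r, g, b), cam_rgb))


def depth_merge(ar_rgba, ar_depth, cam_rgb, cam_depth):
    """Hypothetical depth-aware merge (not available in CloudXR today).

    Only composites the AR pixel when the virtual surface is closer to the
    viewer than the real one. ar_depth would come from a streamed server
    depth buffer, cam_depth from client-side AR depth sensing -- the two
    missing pieces mentioned above.
    """
    if ar_depth < cam_depth:
        # Virtual object is in front of the real surface: show it.
        return alpha_blend(ar_rgba, cam_rgb)
    # Real object (e.g. your hand) occludes the virtual one: keep the camera pixel.
    return cam_rgb
```

With a depth merge like this, a hand held in front of a virtual box would occlude it; with pure alpha blending, the box is always drawn on top of the camera feed.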

Depth support is on the long-term roadmap, but I have no specific details beyond that.
