External Extensions: OpenXR compact binding for creating extended reality applications

After following these steps, SteamVR crashed the OS on startup for me.

Another question:
is it “true” or true? And “null” or null?

Can you provide these config files?

@toni.sm after I hit the stop button, how do I restart? It seems whatever I do crashes Isaac Sim. Is there any clean-up code I need to run?

Hi @nikepupu9

Yeah, you got me…
It’s a bug in the code (which I still need to fix 😅🙈).

Temporary solution: restart Isaac Sim (the classic solution for all problems)


@toni.sm
Hello, I really appreciate your work, it works well for me.

However, I’m working on a project in which we don’t want the camera pose to follow the HMD pose (we want to control the view with other input). It seems there is no option for that, or maybe I’m missing something in your documentation.

I’m wondering if it’s possible for you to help me with this problem, thanks.

Hi @ableho01

Good news.
The API is designed to be compact but flexible enough to allow users to do whatever they want within the limits of OpenXR (and the API itself of course 😅)…

If you want to change the way the data is sent/rendered to the HMD, you can program and subscribe your own function to the render event using the subscribe_render_event method…

The default implementation (when no function is subscribed) uses an internal callback that performs processing similar to the following code:

def _internal_render(num_views, views, configuration_views):
    # teleport left camera using the HMD's position and orientation
    position = views[0].pose.position
    rotation = views[0].pose.orientation
    position = Gf.Vec3d(position.x, -position.z, position.y) * STAGE_UNIT
    rotation = Gf.Quatd(rotation.w, rotation.x, rotation.y, rotation.z) * LEFT_RECTIFICATION_QUAT
    xr.teleport_prim(LEFT_CAMERA_PRIM, position, rotation, INITIAL_REFERENCE_POSITION, INITIAL_REFERENCE_ROTATION)            

    # teleport right camera using the HMD's position and orientation
    if num_views == 2:
        position = views[1].pose.position
        rotation = views[1].pose.orientation
        position = Gf.Vec3d(position.x, -position.z, position.y) * STAGE_UNIT
        rotation = Gf.Quatd(rotation.w, rotation.x, rotation.y, rotation.z) * RIGHT_RECTIFICATION_QUAT
        xr.teleport_prim(RIGHT_CAMERA_PRIM, position, rotation, INITIAL_REFERENCE_POSITION, INITIAL_REFERENCE_ROTATION)
    
    # acquire frames
    frame_left = sensors.get_rgb(LEFT_VIEWPORT_WINDOW)
    frame_right = sensors.get_rgb(RIGHT_VIEWPORT_WINDOW) if num_views == 2 else None

    # send frame to the HMD
    xr.set_frames(configuration_views, frame_left, frame_right)

Where:

  • STAGE_UNIT is a float scale factor between OpenXR units (meters) and stage units
  • LEFT_RECTIFICATION_QUAT and RIGHT_RECTIFICATION_QUAT are Quatd that rotate the cameras according to the stereo rectification
  • INITIAL_REFERENCE_POSITION and INITIAL_REFERENCE_ROTATION are the position and orientation of the origin of the reference system
  • LEFT_CAMERA_PRIM and RIGHT_CAMERA_PRIM are the camera prims
  • LEFT_VIEWPORT_WINDOW and RIGHT_VIEWPORT_WINDOW are the viewport windows associated with each camera

Note that the Isaac Sim camera position axes differ from the OpenXR position axes:

Isaac Sim (X, Y, Z) = OpenXR (X, -Z, Y)
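
For illustration, that remapping as a small helper (the function name openxr_to_isaac_position is hypothetical, not part of the extension; stage_unit plays the same role as STAGE_UNIT above):

from pxr import Gf

def openxr_to_isaac_position(p, stage_unit=1.0):
    # OpenXR is right-handed with Y up; the stage here is Z up, so
    # (x, y, z) in OpenXR becomes (x, -z, y) in Isaac Sim, scaled to stage units
    return Gf.Vec3d(p.x, -p.z, p.y) * stage_unit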

Then, you can program a custom function to transform the camera according to your specifications…
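
For instance, a minimal sketch of such a custom function, reusing the names from the snippet above (EXTERNAL_POSITION and EXTERNAL_ROTATION are placeholders for whatever input source drives your view; they are not part of the extension):

def custom_callback(num_views, views, configuration_views):
    # pose the left camera from your own input instead of the HMD pose
    xr.teleport_prim(LEFT_CAMERA_PRIM, EXTERNAL_POSITION, EXTERNAL_ROTATION,
                     INITIAL_REFERENCE_POSITION, INITIAL_REFERENCE_ROTATION)

    # acquire and send the frames exactly as the default implementation does
    frame_left = sensors.get_rgb(LEFT_VIEWPORT_WINDOW)
    frame_right = sensors.get_rgb(RIGHT_VIEWPORT_WINDOW) if num_views == 2 else None
    xr.set_frames(configuration_views, frame_left, frame_right)

xr.subscribe_render_event(callback=custom_callback)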


@toni.sm
Thanks for your quick reply, I really appreciate it.

But there’s still a small problem: ‘sensors’ is not defined.

I combined the example code with the one from your reply, as shown below:

import omni
import numpy
from omni.add_on.openxr import _openxr
from pxr import Gf

# acquire interface
xr = _openxr.acquire_openxr_interface()

# setup OpenXR application using default parameters
xr.init()
xr.create_instance()
xr.get_system()

# view_callback
def view_callback(num_views, views, configuration_views):    
    # acquire frames
    frame_left = sensors.get_rgb(LEFT_VIEWPORT_WINDOW)
    frame_right = sensors.get_rgb(RIGHT_VIEWPORT_WINDOW) if num_views == 2 else None

    # send frame to the HMD
    xr.set_frames(configuration_views, frame_left, frame_right)
    
# subscribe to the render event
xr.subscribe_render_event(callback=view_callback)

# create session and define interaction profiles
xr.create_session()

# setup cameras and viewports and prepare rendering using the internal callback
xr.setup_stereo_view("/World/Head/left_eye_ball/left_eye_cam", "/World/Head/right_eye_ball/right_eye_cam")
xr.set_frame_transformations(flip=0)
xr.set_stereo_rectification(y=0.05)

# execute action and rendering loop on each simulation step
def on_simulation_step(step):
    if xr.poll_events() and xr.is_session_running():
        xr.render_views(_openxr.XR_REFERENCE_SPACE_TYPE_STAGE)

physx_subs = omni.physx.get_physx_interface().subscribe_physics_step_events(on_simulation_step)

And I get the error:

name ‘sensors’ is not defined

I believe ‘sensors’ is well defined in the default implementation. But when I try to use the subscribe_render_event method in a script, the definition seems to be missing. Maybe a simple import or a few lines can fix it?

This approach reads the images using the omni.syntheticdata extension…

from omni.syntheticdata import sensors

Note that the camera sensor needs to be initialized… otherwise, you will get an empty frame/array during the first simulation steps (ValueError: cannot reshape array of size 0 into shape (0,0,newaxis))

The following post may help with the initialization process
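
As a rough sketch of that initialization (viewport_window stands for whichever viewport you read from; the calls are the same ones used later in this thread):

from omni.syntheticdata import sensors
import omni.syntheticdata._syntheticdata as sd

sd_interface = sd.acquire_syntheticdata_interface()

# create (or retrieve) the RGB sensor for the viewport; poll it on each
# simulation step and only read frames once it reports initialized
sensor = sensors.create_or_retrieve_sensor(viewport_window, sd.SensorType.Rgb)
if sd_interface.is_sensor_initialized(sensor):
    frame = sensors.get_rgb(viewport_window)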


Thanks for your help, again.

God, I hate that I keep asking basic questions, but I still have a problem. I’m aware that LEFT_VIEWPORT_WINDOW and RIGHT_VIEWPORT_WINDOW are the viewport windows associated with each camera, but they aren’t defined in the script either. I’m not familiar with using scripts (I’ve only used the UI before), so I haven’t figured out how to assign the viewports to these two variables.

It seems that the line:

xr.setup_stereo_view("/World/Head/left_eye_ball/left_eye_cam", "/World/Head/right_eye_ball/right_eye_cam")

already defines these two, but I can’t access them from the script.

Therefore, I tried to find another way to get these two viewports. From the link you gave, it looks like I could get a viewport with these two lines:

viewport_handle = omni.kit.viewport.get_viewport_interface().create_instance()
viewport_window = omni.kit.viewport.get_viewport_interface().get_viewport_window(viewport_handle)

But how can I specify the correct viewports for left and right?

Hi @ableho01

I use a function similar to the following to get or create the viewport windows…

from pxr import Usd  # needed for the Usd.Prim type check below

def get_or_create_viewport_window(camera, teleport=True, window_size=(400, 300), resolution=(1280, 720)):
    window = None
    camera = str(camera.GetPath() if type(camera) is Usd.Prim else camera)
    # get viewport window
    for interface in VIEWPORT_INTERFACE.get_instance_list():
        w = VIEWPORT_INTERFACE.get_viewport_window(interface)
        if camera == w.get_active_camera():
            window = w
            # check visibility
            if not w.is_visible():
                w.set_visible(True)
            break
    # create a viewport window if none exists
    if window is None:
        window = VIEWPORT_INTERFACE.get_viewport_window(VIEWPORT_INTERFACE.create_instance())
        window.set_window_size(*window_size)
        window.set_active_camera(camera)
        window.set_texture_resolution(*resolution)
        if teleport:
            window.set_camera_position(camera, 1.0, 1.0, 1.0, True)
            window.set_camera_target(camera, 0.0, 0.0, 0.0, True)
    return window

where VIEWPORT_INTERFACE is omni.kit.viewport.get_viewport_interface()
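
For example, to get the two viewports for the camera paths used earlier in this thread (a sketch, assuming the function above):

import omni

VIEWPORT_INTERFACE = omni.kit.viewport.get_viewport_interface()

LEFT_VIEWPORT_WINDOW = get_or_create_viewport_window("/World/Head/left_eye_ball/left_eye_cam")
RIGHT_VIEWPORT_WINDOW = get_or_create_viewport_window("/World/Head/right_eye_ball/right_eye_cam")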


Now that you are trying this, it makes sense to allow access to the main variables created by some methods, such as:

  • The setup_stereo_view method:
    • The viewport windows: self._viewport_window_left, self._viewport_window_right
    • The camera prims: self._prim_left, self._prim_right
  • The set_stereo_rectification method:
    • The stereo rectification quaternions: self._rectification_quat_left, self._rectification_quat_right

Could you please do some tests and try to access those properties after calling the respective methods?
Unfortunately, I don’t have the VR equipment with me right now :(

e.g.: xr._viewport_window_left


Thanks for all your help!! REALLY appreciate it.
I can see the image in my HMD now!!
The setup_stereo_view method works. And this is my code (it may not be the best way):

import omni
import numpy
from omni.add_on.openxr import _openxr
from pxr import Gf
from omni.syntheticdata import sensors
import omni.syntheticdata._syntheticdata as sd

_sd_interface = sd.acquire_syntheticdata_interface()
is_sensor_initialized = False

# acquire interface
xr = _openxr.acquire_openxr_interface()

# setup OpenXR application using default parameters
xr.init()
xr.create_instance()
xr.get_system()

# view_callback
def view_callback(num_views, views, configuration_views):    
    # acquire frames
    global LEFT_VIEWPORT_WINDOW
    global RIGHT_VIEWPORT_WINDOW
    frame_left = sensors.get_rgb(LEFT_VIEWPORT_WINDOW)
    frame_right = sensors.get_rgb(RIGHT_VIEWPORT_WINDOW) if num_views == 2 else None

    # send frame to the HMD
    xr.set_frames(configuration_views, frame_left, frame_right)
    
# subscribe to the render event
xr.subscribe_render_event(callback=view_callback)

# create session and define interaction profiles
xr.create_session()

# setup cameras and viewports and prepare rendering using the internal callback
xr.setup_stereo_view("/World/Head/left_eye_ball/left_eye_cam", "/World/Head/right_eye_ball/right_eye_cam")
LEFT_VIEWPORT_WINDOW = xr._viewport_window_left
RIGHT_VIEWPORT_WINDOW = xr._viewport_window_right
xr.set_frame_transformations(flip=0)
xr.set_stereo_rectification(y=0.05)

# execute action and rendering loop on each simulation step

def on_simulation_step(step):
    global is_sensor_initialized
    global _sd_interface
    global LEFT_VIEWPORT_WINDOW
    global RIGHT_VIEWPORT_WINDOW
    if xr.poll_events() and xr.is_session_running():
        if not is_sensor_initialized:
            print("Waiting for sensor to initialize")
            sensor_left = sensors.create_or_retrieve_sensor(LEFT_VIEWPORT_WINDOW, sd.SensorType.Rgb)
            is_sensor_initialized_left = _sd_interface.is_sensor_initialized(sensor_left)
            sensor_right = sensors.create_or_retrieve_sensor(RIGHT_VIEWPORT_WINDOW, sd.SensorType.Rgb)
            is_sensor_initialized_right = _sd_interface.is_sensor_initialized(sensor_right)
            is_sensor_initialized = is_sensor_initialized_right and is_sensor_initialized_left
            if is_sensor_initialized:
                print("Sensor initialized!")
        if is_sensor_initialized:
            xr.render_views(_openxr.XR_REFERENCE_SPACE_TYPE_STAGE)

physx_subs = omni.physx.get_physx_interface().subscribe_physics_step_events(on_simulation_step)

The result:

Hi @ableho01

Glad to hear it works
And many thanks for trying and testing the extension :)