External Extensions: OpenXR compact binding for creating extended reality applications

Hi @j.tigue

This error is related to a bad rendering configuration.
Unfortunately, I don't have access to the VR equipment until I return from vacation at the end of the year. Until then it is impossible for me to test the code :(

However, there are some things you can try (and report back here) to narrow down this issue…

  • Test the return of the main functions

    The main functions that take care of configuring the system return a boolean value,
    so it is possible to know which configuration step is failing…

    print("xr.init:", xr.init())
    print("xr.create_instance:", xr.create_instance())
    print("xr.get_system:", xr.get_system())
    ...
    print("xr.create_session:", xr.create_session())
    
  • Inspect output messages from the extension on the terminal

    Messages related to the configuration and use of this extension should appear in the terminal where Isaac Sim is running. To get more detailed output, set the following environment variables in the terminal before running Isaac Sim:

    export XR_LOADER_DEBUG=all
    export VR_LOG_DEBUG=1
    
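The first suggestion above can be sketched as a small, hypothetical helper (not part of the extension) that runs each setup step in order and reports the first one that fails:

```python
# Hypothetical helper: run each configuration step in order, print its boolean
# result, and stop at the first failure.
def run_setup_steps(steps):
    for name, step in steps:
        ok = step()
        print(f"{name}: {ok}")
        if not ok:
            return name  # name of the first failing step
    return None  # every step succeeded

# With the extension it would be called like:
# failed = run_setup_steps([
#     ("xr.init", xr.init),
#     ("xr.create_instance", xr.create_instance),
#     ("xr.get_system", xr.get_system),
#     ("xr.create_session", xr.create_session),
# ])
```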

Thanks for the assist. One other thing I should note is that I am running with a “null” headset driver trying to just use the trackers/controllers:

##----------------------------Below are the outputs:
print("Init:", xr.init())
print("Create Instance:", xr.create_instance())
print("Get System:", xr.get_system())
print("Create Session:", xr.create_session())

##--------------------------------Output:
Init: True
Create Instance: True
Get System: True
Create Session: True

##----------------------------Extension Loading:
[8.160s] [ext: omni.add_on.openxr-0.0.1] startup
[8.190s] app started
[15.128s] Isaac Sim App is loaded.
[15.185s] Checking for Isaac Sim assets on Nucleus
[15.240s] Nucleus detected successfully: omniverse://localhost
[INFO] OpenXR initialized using pybind11 interface
Init: True
OpenXR API layers (0)
OpenXR extensions (13)
|-- XR_KHR_vulkan_enable
|-- XR_KHR_vulkan_enable2
|-- XR_KHR_opengl_enable
| (requested)
|-- XR_KHR_binding_modification
|-- XR_VALVE_analog_threshold
|-- XR_EXT_hand_tracking
|-- XR_EXT_hand_joints_motion_range
|-- XR_EXT_hp_mixed_reality_controller
|-- XR_HTC_vive_cosmos_controller_interaction
|-- XR_KHR_visibility_mask
|-- XR_UNITY_hand_model_pose
|-- XR_KHR_composition_layer_depth
|-- XR_EXT_debug_utils
Create Instance: True
Runtime
|-- name: SteamVR/OpenXR
|-- version: 0.1.0
System
|-- system id: 1153031043452764510
|-- system name: SteamVR/OpenXR : null
|-- vendor id: 10462
|-- max layers: 16
|-- max swapchain height: 2056
|-- max swapchain width: 1852
|-- orientation tracking: 1
|-- position tracking: 1
View configurations (1)
|-- type 2 (The OpenXR Specification)
| (requested)
View configuration properties
|-- configuration type: 2 (The OpenXR Specification)
|-- fov mutable (bool): 1
View configuration views (2)
|-- view 0
| |-- recommended resolution: 1852 x 2056
| |-- max resolution: 1852 x 2056
| |-- recommended swapchain samples: 1
| |-- max swapchain samples: 1
|-- view 1
| |-- recommended resolution: 1852 x 2056
| |-- max resolution: 1852 x 2056
| |-- recommended swapchain samples: 1
| |-- max swapchain samples: 1
Environment blend modes (1)
|-- mode: 1 (The OpenXR Specification)
| (requested)
Get System: True
OpenGL requirements
|-- min API version: 4.3.0
|-- max API version: 4.6.0
Graphics binding: OpenGL
|-- xDisplay: 0x1dfe0ac0
|-- visualid: 0
|-- glxFBConfig: 0
|-- glxDrawable: 121634827
|-- glxContext: 0x1dfc0208
Reference spaces (3)
|-- type: 1 (The OpenXR Specification)
| |-- reference space bounds
| | |-- width: 0
| | |-- height: 0
|-- type: 2 (The OpenXR Specification)
| |-- reference space bounds
| | |-- width: 0
| | |-- height: 0
|-- type: 3 (The OpenXR Specification)
| |-- reference space bounds
| | |-- width: 1
| | |-- height: 1
Suggested interaction bindings by profiles
|-- /interaction_profiles/khr/simple_controller (1)
|-- /interaction_profiles/google/daydream_controller (1)
|-- /interaction_profiles/htc/vive_controller (2)
|-- /interaction_profiles/htc/vive_pro (0)
|-- /interaction_profiles/microsoft/motion_controller (2)
|-- /interaction_profiles/microsoft/xbox_controller (0)
|-- /interaction_profiles/oculus/go_controller (1)
|-- /interaction_profiles/oculus/touch_controller (2)
|-- /interaction_profiles/valve/index_controller (2)
Swapchain formats (8)
|-- format: 32859
|-- format: 34842
| (selected)
|-- format: 34843
|-- format: 35905
|-- format: 35907
|-- format: 33189
|-- format: 33190
|-- format: 33191
Created swapchain (2)
|-- swapchain: 0
| |-- width: 1852
| |-- height: 2056
| |-- sample count: 1
| |-- swapchain images: 3
|-- swapchain: 1
| |-- width: 1852
| |-- height: 2056
| |-- sample count: 1
| |-- swapchain images: 3
Create Session: True

Hi @j.tigue

According to the logs, the configuration looks good.
Regarding the use of the HTC Vive Trackers without a headset: that blog post refers to OpenVR; however, the omni.add_on.openxr extension is built on the OpenXR standard.

According to the OpenXR standard, it is necessary to call the render loop in order to update the actions (controller pose, for example)*.

You can therefore subscribe an empty render function to change the default frame-loop behaviour. That way, the graphics handler will not be called and no graphics will be rendered.

def render_callback(num_views, views, configuration_views):
    pass

xr.subscribe_render_event(render_callback)

I will have access to VR equipment next week.
By the way, are you able to run the default example (code shown in GitHub) using the full VR equipment (with the headset)?

*Note: I remember reading and testing this, but I cannot find the exact reference in the specification right now. The closest reference I can find is the OpenXR session lifecycle.

Also, your configuration refers to a stereo configuration (XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO):

View configurations (1)
|-- type 2 (https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XrViewConfigurationType)

However, in the Python code you provided you are calling the setup_mono_view method instead of setup_stereo_view.

@toni.sm

That did it. After I added the render_callback (and subscription) and changed to setup_stereo_view it worked.

As for the HTC Vive Trackers without a headset: I was able to operate without the headset using just the normal HTC Vive controllers over OpenXR, following the blog-post documentation.

Here is code that was working:
testSteamVRInterface.py (2.5 KB)

Thanks for all of your help!

Hi @j.tigue

Glad to hear it worked 😁

Regarding that last point, I could not fully understand it 🙈. Could you please provide more details? Did you get the trackers' pose? What code did you use for it — OpenXR and not OpenVR?

@toni.sm
I was able to get either the HTC Vive Trackers or the HTC Vive Controllers connected to SteamVR, which works with OpenXR, without needing a headset (goggles/HMD). As the blog post suggests, you need to make a few changes to some settings files:

  1. In "SteamDirectory\steamapps\common\SteamVR\drivers\null\resources\settings\default.vrsettings"
    "enable": true,

  2. In "SteamDirectory\config\steamvr.vrsettings"
    "forcedDriver": "null",
    "activateMultipleDrivers": "true",

  3. In "SteamDirectory\steamapps\common\SteamVR\resources\settings\default.vrsettings"
    "requireHmd": false,
    "forcedDriver": "null",
    "activateMultipleDrivers": true,

This allows tracking of controllers or trackers without a headset. A virtual headset is still displayed on the monitor, but it can be ignored. I am still dealing with workspace-calibration issues without the headset, but tracking currently works using one of the base stations as the origin.
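As a sanity check, these fragments are plain JSON. The sketch below (the surrounding file structure is assumed, not copied from an actual .vrsettings file) shows the value types: the driver name "null" is a JSON string, while the flags are JSON booleans:

```python
import json

# Hypothetical fragment mirroring the snippets above; the exact surrounding
# structure of the .vrsettings files is assumed.
fragment = """{
    "requireHmd": false,
    "forcedDriver": "null",
    "activateMultipleDrivers": true
}"""
settings = json.loads(fragment)
# "null" here is the driver *name* (a JSON string), not the JSON null value
print(type(settings["forcedDriver"]).__name__)  # str
print(type(settings["requireHmd"]).__name__)    # bool
```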

The OpenXR binding extension (that you created/provided) connects the SteamVR/OpenXR information to Isaac Sim. So far I am only connecting the controller pose/trigger/haptics and not the tracker pose.


Thank you for clarifying 😁

After following these steps, my SteamVR just crashed the OS on startup.

Another question: is it "true" or true? And "null" or null?

Can you provide these config files?

@toni.sm after I hit the stop button, how do I restart? It seems whatever I do crashes Isaac Sim. Is there any clean-up code I need to run?

Hi @nikepupu9

Yeah, you got me…
It's a bug in the code (which I still have to fix 😅🙈).

Temporary solution: restart Isaac Sim (the classic solution for all problems)


@toni.sm
Hello, I really appreciate your work, it works well for me.

However, I'm working on a project in which we don't want the camera pose to be driven by the HMD pose (we want to control the view with other input). It seems there is no option for that, or maybe I'm missing something in your documentation.

I’m wondering if it’s possible for you to help me with this problem, thanks.

Hi @ableho01

Good news.
The API is designed to be compact but flexible enough to allow users to do whatever they want within the limits of OpenXR (and the API itself of course 😅)…

If you want to change the way the data is sent/rendered to the HMD, you can program and subscribe your own function to the render event using the subscribe_render_event method…

The default implementation (when no function is subscribed) uses an internal callback that performs processing similar to the following code:

def _internal_render(num_views, views, configuration_views):
    # teleport left camera using the HMD's position and orientation
    position = views[0].pose.position
    rotation = views[0].pose.orientation
    position = Gf.Vec3d(position.x, -position.z, position.y) * STAGE_UNIT
    rotation = Gf.Quatd(rotation.w, rotation.x, rotation.y, rotation.z) * LEFT_RECTIFICATION_QUAT
    xr.teleport_prim(LEFT_CAMERA_PRIM, position, rotation, INITIAL_REFERENCE_POSITION, INITIAL_REFERENCE_ROTATION)            

    # teleport right camera using the HMD's position and orientation
    if num_views == 2:
        position = views[1].pose.position
        rotation = views[1].pose.orientation
        position = Gf.Vec3d(position.x, -position.z, position.y) * STAGE_UNIT
        rotation = Gf.Quatd(rotation.w, rotation.x, rotation.y, rotation.z) * RIGHT_RECTIFICATION_QUAT
        xr.teleport_prim(RIGHT_CAMERA_PRIM, position, rotation, INITIAL_REFERENCE_POSITION, INITIAL_REFERENCE_ROTATION)
    
    # acquire frames
    frame_left = sensors.get_rgb(LEFT_VIEWPORT_WINDOW)
    frame_right = sensors.get_rgb(RIGHT_VIEWPORT_WINDOW) if num_views == 2 else None

    # send frame to the HMD
    xr.set_frames(configuration_views, frame_left, frame_right)

Where:

  • STAGE_UNIT is a float
  • LEFT_RECTIFICATION_QUAT and RIGHT_RECTIFICATION_QUAT are Quatd values that rotate the cameras according to the stereo rectification
  • INITIAL_REFERENCE_POSITION and INITIAL_REFERENCE_ROTATION define the pose of the origin of the reference system
  • LEFT_CAMERA_PRIM and RIGHT_CAMERA_PRIM are the camera prims
  • LEFT_VIEWPORT_WINDOW and RIGHT_VIEWPORT_WINDOW are the viewport windows associated with each camera

Note that the Isaac Sim camera position axis differs from the OpenXR position axis

Isaac Sim (X, Y, Z) = OpenXR (X, -Z, Y)
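That mapping can be expressed as a minimal, framework-free helper (plain tuples instead of Gf.Vec3d):

```python
def openxr_to_isaac(position):
    """Map an OpenXR (x, y, z) position to Isaac Sim axes: (x, -z, y)."""
    x, y, z = position
    return (x, -z, y)

# e.g. a point one unit in front of the OpenXR origin (-Z is "forward" in OpenXR)
print(openxr_to_isaac((0.0, 0.0, -1.0)))  # (0.0, 1.0, 0.0)
```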

Then, you can program a custom function to transform the camera according to your specifications…


@toni.sm
Thanks for your quick reply, I really appreciate it.

But there's still a small problem: 'sensors' is not defined.

I combined the example code and the one you just replied, like below:

import omni
import numpy
from omni.add_on.openxr import _openxr
from pxr import Gf

# acquire interface
xr = _openxr.acquire_openxr_interface()

# setup OpenXR application using default parameters
xr.init()
xr.create_instance()
xr.get_system()

# view_callback
def view_callback(num_views, views, configuration_views):    
    # acquire frames
    frame_left = sensors.get_rgb(LEFT_VIEWPORT_WINDOW)
    frame_right = sensors.get_rgb(RIGHT_VIEWPORT_WINDOW) if num_views == 2 else None

    # send frame to the HMD
    xr.set_frames(configuration_views, frame_left, frame_right)
    
#subscribe render event
xr.subscribe_render_event(callback = view_callback)

# create session and define interaction profiles
xr.create_session()

# setup cameras and viewports and prepare rendering using the internal callback
xr.setup_stereo_view("/World/Head/left_eye_ball/left_eye_cam", "/World/Head/right_eye_ball/right_eye_cam")
xr.set_frame_transformations(flip=0)
xr.set_stereo_rectification(y=0.05)

# execute action and rendering loop on each simulation step
def on_simulation_step(step):
    if xr.poll_events() and xr.is_session_running():
        xr.render_views(_openxr.XR_REFERENCE_SPACE_TYPE_STAGE)

physx_subs = omni.physx.get_physx_interface().subscribe_physics_step_events(on_simulation_step)

And, I get the error:

NameError: name 'sensors' is not defined

I believe 'sensors' is defined somewhere in the default implementation, but when I try to use the subscribe_render_event method in a script, that definition isn't available. Maybe a simple import or a few lines can fix it?

This approach reads the images using the omni.syntheticdata extension…

from omni.syntheticdata import sensors

Note that the camera sensor needs to be initialized… otherwise, you will get an empty frame/array during the first simulation steps (ValueError: cannot reshape array of size 0 into shape (0,0,newaxis))

The following post may help with the initialization process
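As a framework-agnostic sketch, a small hypothetical guard (not part of the extension) can be used to skip sending frames until a non-empty one arrives:

```python
def frame_ready(frame):
    """Return True once a frame is usable (sensors yield empty arrays at first)."""
    if frame is None:
        return False
    size = getattr(frame, "size", None)  # numpy arrays expose .size
    return size > 0 if size is not None else len(frame) > 0
```

In the render callback, xr.set_frames would then only be called when frame_ready(frame_left) is True.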


Thanks for your help, again.

God, I hate to keep asking basic questions, but I still have a problem. I am aware that LEFT_VIEWPORT_WINDOW and RIGHT_VIEWPORT_WINDOW are the viewport windows associated with each camera, but they aren't defined in the script either. I'm not familiar with scripting (I've only used the UI before), so I couldn't figure out how to assign the viewports to these two variables.

It seems that this line:

xr.setup_stereo_view("/World/Head/left_eye_ball/left_eye_cam", "/World/Head/right_eye_ball/right_eye_cam")

already defines these two, but I can't access them from the script.

Therefore, I tried to find another way to get these two viewports. From the link you gave, it looks like I could get a viewport with these two lines:

viewport_handle = omni.kit.viewport.get_viewport_interface().create_instance()
viewport_window = omni.kit.viewport.get_viewport_interface().get_viewport_window(viewport_handle)

But how can I specify the correct viewports for left and right?

Hi @ableho01

I use a function similar to the following to get or create the viewport windows…

def get_or_create_viewport_window(camera, teleport=True, window_size=(400, 300), resolution=(1280, 720)):
    window = None
    camera = str(camera.GetPath() if type(camera) is Usd.Prim else camera)
    # get viewport window
    for interface in VIEWPORT_INTERFACE.get_instance_list():
        w = VIEWPORT_INTERFACE.get_viewport_window(interface)
        if camera == w.get_active_camera():
            window = w
            # check visibility
            if not w.is_visible():
                w.set_visible(True)
            break
    # create viewport window if not exist
    if window is None:
        window = VIEWPORT_INTERFACE.get_viewport_window(VIEWPORT_INTERFACE.create_instance())
        window.set_window_size(*window_size)
        window.set_active_camera(camera)
        window.set_texture_resolution(*resolution)
        if teleport:
            window.set_camera_position(camera, 1.0, 1.0, 1.0, True)
            window.set_camera_target(camera, 0.0, 0.0, 0.0, True)
    return window

where VIEWPORT_INTERFACE is omni.kit.viewport.get_viewport_interface()


Now, given what you are trying to do, it makes sense to allow access to the main variables created by some methods, such as:

  • The setup_stereo_view method:

    The viewports

    • self._viewport_window_left
    • self._viewport_window_right

    The camera prims

    • self._prim_left
    • self._prim_right
  • The set_stereo_rectification method:

    The quaternions for stereo rectification

    • self._rectification_quat_left
    • self._rectification_quat_right

Could you please do some tests and try to access those properties after calling the respective methods?
Unfortunately, I don't have the VR equipment with me right now :(

e.g.: xr._viewport_window_left


Thanks for all your help!! I REALLY appreciate it.
I can see the image in my HMD now!!
The setup_stereo_view method works. And this is my code (it may not be the best way):

import omni
import numpy
from omni.add_on.openxr import _openxr
from pxr import Gf
from omni.syntheticdata import sensors
import omni.syntheticdata._syntheticdata as sd

_sd_interface = sd.acquire_syntheticdata_interface()
is_sensor_initialized = False

# acquire interface
xr = _openxr.acquire_openxr_interface()

# setup OpenXR application using default parameters
xr.init()
xr.create_instance()
xr.get_system()

# view_callback
def view_callback(num_views, views, configuration_views):    
    # acquire frames
    global LEFT_VIEWPORT_WINDOW
    global RIGHT_VIEWPORT_WINDOW
    frame_left = sensors.get_rgb(LEFT_VIEWPORT_WINDOW)
    frame_right = sensors.get_rgb(RIGHT_VIEWPORT_WINDOW) if num_views == 2 else None

    # send frame to the HMD
    xr.set_frames(configuration_views, frame_left, frame_right)
    
#subscribe render event
xr.subscribe_render_event(callback = view_callback)

# create session and define interaction profiles
xr.create_session()

# setup cameras and viewports and prepare rendering using the internal callback
xr.setup_stereo_view("/World/Head/left_eye_ball/left_eye_cam", "/World/Head/right_eye_ball/right_eye_cam")
LEFT_VIEWPORT_WINDOW = xr._viewport_window_left
RIGHT_VIEWPORT_WINDOW = xr._viewport_window_right
xr.set_frame_transformations(flip=0)
xr.set_stereo_rectification(y=0.05)

# execute action and rendering loop on each simulation step

def on_simulation_step(step):
    global is_sensor_initialized
    global _sd_interface
    global LEFT_VIEWPORT_WINDOW
    global RIGHT_VIEWPORT_WINDOW
    if xr.poll_events() and xr.is_session_running():
        if not is_sensor_initialized:
            print("Waiting for sensor to initialize")
            sensor_left = sensors.create_or_retrieve_sensor(LEFT_VIEWPORT_WINDOW, sd.SensorType.Rgb)
            is_sensor_initialized_left = _sd_interface.is_sensor_initialized(sensor_left)
            sensor_right = sensors.create_or_retrieve_sensor(RIGHT_VIEWPORT_WINDOW, sd.SensorType.Rgb)
            is_sensor_initialized_right = _sd_interface.is_sensor_initialized(sensor_right)
            is_sensor_initialized = is_sensor_initialized_right and is_sensor_initialized_left
            if is_sensor_initialized:
                print("Sensor initialized!")
        if is_sensor_initialized:
            xr.render_views(_openxr.XR_REFERENCE_SPACE_TYPE_STAGE)

physx_subs = omni.physx.get_physx_interface().subscribe_physics_step_events(on_simulation_step)

The result:

Hi @ableho01

Glad to hear it works
And many thanks for trying and testing the extension :)