Using the VR extension from a Python standalone application via the API

Hello everyone,

I am trying to connect my VR headset to my simulation environment in Isaac Sim. Right now I am able to open my simulation environment from my Python standalone script, autoload omni.kit.xr.profile.vr, start VR, look into my simulation through the headset, and manipulate objects. However, I can't move or teleport around in my scene, which would be useful for my use case.
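For context, this is roughly how I set up the standalone and autoload the extension. It is a minimal sketch only; module paths such as omni.isaac.kit and the exact SimulationApp options may differ between Isaac Sim versions, and starting the VR session itself is still done from the VR panel in the UI.

```python
# Minimal sketch of the standalone setup described above (assumes a recent
# Isaac Sim; module paths may differ between versions).
from omni.isaac.kit import SimulationApp

# SimulationApp must be created before importing any other omni modules.
simulation_app = SimulationApp({"headless": False})

from omni.isaac.core.utils.extensions import enable_extension

# Autoload the VR profile extension; the VR session itself is then started
# from the VR panel in the UI.
enable_extension("omni.kit.xr.profile.vr")

# Keep the app alive so the UI (and the VR panel) stays responsive.
while simulation_app.is_running():
    simulation_app.update()

simulation_app.close()
```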

What I want to do, though, is use the Python library of the extension to interact with VR from my Python code (read the 6D pose of the controllers, read the controller buttons, and so on). I have tried to look for documentation, but with no luck.

Has anyone managed to successfully use the Python extension, or does anyone know of documentation that could help me?


@gubor22 I have done the same as you and also want to use the Python library. I have not found the omni.kit.xr library, and I am trying to figure out how to go further. I do have a few suggestions that might at least kludge a solution.
After VR starts, an XR GUI is created, and the controllers and the headset (xrdisplaydevice0) become available as prims. You can add objects to the stage and drag/drop them onto the controllers and headset, so others can see you. Because the controllers, headset, and other items are now available, I think you could use an action script to read everything from the controllers and make the scene interactive. I also think you can add GUI elements in the viewport where they can be interacted with. I am exploring this now, and I plan to experiment with some Python scripts to make the scene interactive as well. The kludge is that you have to start VR before the XR GUI elements become available.
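As a rough illustration of the Python-script side of this (not a verified API; the controller prim path below is only a placeholder, so check the Stage window for the real path after starting VR), the pose of a controller prim can be read through plain USD once it exists on the stage:

```python
import omni.usd
from pxr import Usd, UsdGeom

# Placeholder path: replace with the actual controller prim path that appears
# in the stage tree after VR has started.
CONTROLLER_PRIM_PATH = "/xr_gui/left_controller"

stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath(CONTROLLER_PRIM_PATH)

if prim.IsValid():
    # Compute the prim's world transform and pull out position and rotation.
    world_tf = UsdGeom.Xformable(prim).ComputeLocalToWorldTransform(
        Usd.TimeCode.Default()
    )
    position = world_tf.ExtractTranslation()
    rotation = world_tf.ExtractRotation().GetQuat()
    print("controller position:", position)
    print("controller rotation:", rotation)
else:
    print("Controller prim not found - has VR been started?")
```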

You can have as many VR collaborators in a Live Session as you need, alongside the Desktop collaborators. But to make it more useful, they need to be visible and able to interact with the stage.

Yeah, once everything has loaded I can see through the VR headset and use the controllers to move objects. However, I am not able to move around, and I would like to access the controllers from my Python script so I can trigger functions based on user input. Have you tried doing this?

I am busy trying to clone my Nucleus server to a larger drive, so I have not had time to try anything. I believe that once VR has started, you can use an action script to interact with the items in the XR GUI tree. I am also new to trying this, but I think you can add a read node in the action script and select the controller (drag/drop), then move the controller around and you should see the values in the read node's properties change. If that works, I think a lot of things will work. If you would like to collaborate on trying this, we can connect by email. I hope to explore this in the next day or two.

Hi @gubor22 @rthaker. I'm trying to do the same thing: read VR controller pose and inputs using the Python standalone workflow.

Did you come across a solution?

That was a while ago. I did not come across a solution.

I stopped trying because I believe I was told that the feature was not currently in the software but would be added at some point.

The only thing I did was attach objects to my controllers so I could see my "hands" and others could see them. This helped when two people were in the same scene: I was able to have two people in VR manipulating the same objects, and by attaching objects to our hands and heads we could see each other. I am not certain, but I think it was an easy drag/drop of a sphere onto a VR object after the VR software started and created the VR objects. Drag/drop cubes onto the controllers. If you try any of this and have problems, I can go back and look at what I did.

Hey, I have done the same as bjcarcve: attached a shape to the VR controllers and tracked the pose from there.
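For anyone following along, this is roughly what attaching a marker shape and tracking its pose could look like from Python. It is a sketch only: the controller prim path is a placeholder, transform inheritance assumes the controller prim is xformable, and dragging/dropping a shape in the UI works just as well.

```python
import omni.usd
from pxr import Usd, UsdGeom

# Placeholder path: use the controller prim that shows up after VR starts.
CONTROLLER_PRIM_PATH = "/xr_gui/right_controller"

stage = omni.usd.get_context().get_stage()

# Create a small cube as a child of the controller prim so it inherits the
# controller's transform and moves with it (and is visible to collaborators).
cube = UsdGeom.Cube.Define(stage, CONTROLLER_PRIM_PATH + "/hand_marker")
cube.GetSizeAttr().Set(0.05)

# Sample the cube's world pose (e.g. once per frame from an update callback);
# it follows the controller because it is parented under it.
world_tf = UsdGeom.Xformable(cube.GetPrim()).ComputeLocalToWorldTransform(
    Usd.TimeCode.Default()
)
print("tracked controller position:", world_tf.ExtractTranslation())
```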