How to get viewer camera parameters?

Hi!

I’m developing a mouse-driven interaction module between the user and the objects in an Isaac Gym environment.
To do that, I need to implement a function that derives the 3D coordinate corresponding to the 2D pixel position of the mouse.
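
For reference, this is roughly the unprojection I plan to do once I have the matrices (a minimal numpy sketch; pixel_to_world_ray is my own placeholder, and the matrix layout and clip-space convention would still need to be checked against what Isaac Gym actually returns):

import numpy as np

def pixel_to_world_ray(px, py, width, height, view_matrix, proj_matrix):
    # Pixel -> normalized device coordinates in [-1, 1] (y flipped).
    x_ndc = 2.0 * px / width - 1.0
    y_ndc = 1.0 - 2.0 * py / height
    # Unproject points on the near and far clip planes back to world space.
    inv_vp = np.linalg.inv(np.asarray(proj_matrix) @ np.asarray(view_matrix))
    near = inv_vp @ np.array([x_ndc, y_ndc, -1.0, 1.0])
    far = inv_vp @ np.array([x_ndc, y_ndc, 1.0, 1.0])
    near, far = near[:3] / near[3], far[:3] / far[3]
    direction = far - near
    return near, direction / np.linalg.norm(direction)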

In the Isaac Gym manual there are APIs to get the projection and view matrices of the camera sensors in each environment,
but I couldn’t find APIs to get the same matrices for the viewer camera, NOT an environment camera.

How can I get these matrices?
Or is there any other way to implement this function?

Thanks

I didn’t quite get your question.
Have you checked the projectiles.py example?
You can get the mouse position with this code:
pos = gym.get_viewer_mouse_position(viewer)
window_size = gym.get_viewer_size(viewer)
xcoord = round(pos.x * window_size.x)
ycoord = round(pos.y * window_size.y)
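
And here is how those lines are wired up in projectiles.py, in case that was the missing piece (just a sketch; gym, sim and viewer are assumed to be created already):

from isaacgym import gymapi

# Subscribe once, right after creating the viewer (as in projectiles.py).
gym.subscribe_viewer_mouse_event(viewer, gymapi.MOUSE_LEFT_BUTTON, "mouse_shoot")

# Then poll the action events inside the simulation loop.
for evt in gym.query_viewer_action_events(viewer):
    if evt.action == "mouse_shoot" and evt.value > 0:
        pos = gym.get_viewer_mouse_position(viewer)
        window_size = gym.get_viewer_size(viewer)
        xcoord = round(pos.x * window_size.x)
        ycoord = round(pos.y * window_size.y)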

Thanks for the reply.

I know how to get the parameters of the camera described in the projectiles.py example.
Those are the parameters of an individual camera sensor that captures images of its own environment.

But what I want are the parameters of the viewer camera, which renders all environments at once.
In other words, when headless == False, users can observe all the environments at once and can pan, rotate, and zoom in/out through the viewer camera.

Again, the viewer camera is the one that lets users observe the scene and manipulate the viewpoint with the mouse when the Isaac Gym GUI pops up, and I want to know how to get the parameters of that camera.
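
Concretely, I mean the camera that is created together with the viewer and positioned like this (sketch; gym and sim are assumed to exist already, and the positions are arbitrary):

from isaacgym import gymapi

# The viewer camera: you only pass CameraProperties to create_viewer and can
# reposition it with viewer_camera_look_at; after that the user drives it
# freely with the mouse (pan / rotate / zoom).
cam_props = gymapi.CameraProperties()
viewer = gym.create_viewer(sim, cam_props)
gym.viewer_camera_look_at(viewer, None, gymapi.Vec3(5.0, 5.0, 3.0), gymapi.Vec3(0.0, 0.0, 0.0))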

You mean intrinsic/extrinsic parameters of the main viewer camera?
I don’t know if we have access to its parameters, or even whether they stay constant during the simulation or change depending on the distance from the floor and other factors.
But if they are constant, one fun way to find them is with machine-vision camera calibration methods.
You could import a chessboard pattern (or a calibration object) into the environment, move the main camera around it, save the pictures, then feed them to camera calibration software like the Matlab machine vision toolbox and recover the camera parameters. I’m guessing it would take a few hours to make it work.
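
If you want to stay in Python, the calibration step itself could be done with OpenCV instead of the Matlab toolbox, along these lines (just a sketch; the chessboard size and the viewer_captures folder are placeholders for whatever you save from the viewer):

import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner-corner count of the chessboard pattern (placeholder)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("viewer_captures/*.png"):   # screenshots saved from the viewer
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix, dist the distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
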
Does it make sense? :)

You mean doing camera calibration in the simulation environment, just like we normally do in the real world with a real camera.
Yes, that makes sense, though it is a little bit cumbersome :)

I’ll try it if I can’t find any better method.
Thanks!

I read the documentation again; it seems the main viewer camera is treated differently from camera sensors, since you don’t create a handle for it and you just pass the camera properties to create_viewer.
One thing you could do is create a camera sensor using the same camera properties you passed to create_viewer,
camera_handle = gym.create_camera_sensor(env, camera_props)
place it at the same location as the viewer camera, and see how different the pictures from the two cameras are. I’m guessing they should be similar. If they are, then use
projection_matrix = np.matrix(gym.get_camera_proj_matrix(sim, env, camera_handle))
view_matrix = np.matrix(gym.get_camera_view_matrix(sim, env, camera_handle))
to get the camera intrinsics (projection matrix) and extrinsics (view matrix).
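
The glue to place the sensor and grab a picture for the comparison would look something like this (sketch; the pose values are placeholders for wherever the viewer camera happens to be):

from isaacgym import gymapi

# Put the sensor camera roughly where the viewer camera is, render, and compare.
gym.set_camera_location(camera_handle, env, gymapi.Vec3(5.0, 5.0, 3.0), gymapi.Vec3(0.0, 0.0, 0.0))
gym.render_all_camera_sensors(sim)
rgb_image = gym.get_camera_image(sim, env, camera_handle, gymapi.IMAGE_COLOR)
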
But the calibration method I mentioned shouldn’t be very difficult either :) graphics.py is a good starting point; you just need to add a chessboard texture and save pictures while moving the main camera around it. It’s also good practice for validating the camera intrinsic parameters.

Another thing: the camera is a pinhole camera, so you should be able to derive its parameters from the camera properties.
If I had time I would test all three methods to get to the bottom of it :)
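
For that pinhole route, the intrinsics would follow from the CameraProperties fields roughly like this (sketch; I’m assuming horizontal_fov is in degrees, square pixels, and a principal point at the image center, which is worth double-checking against the docs):

import math
from isaacgym import gymapi

cam_props = gymapi.CameraProperties()   # the same properties passed to create_viewer
fx = cam_props.width / (2.0 * math.tan(math.radians(cam_props.horizontal_fov) / 2.0))
fy = fx                                  # assuming square pixels
cx, cy = cam_props.width / 2.0, cam_props.height / 2.0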