Incorrect Camera View Matrix?

Hi NVIDIA,

I am extending your point cloud example code to a scenario where I have multiple environments running in parallel. Even though I used the exact same setup for each environment, the generated point clouds are not the same.

To be exact, the point clouds of the first and second environments are 2 meters apart (see the image below for visualizations in Open3D and pptk). I also printed out the point matrices and confirmed that they are different.


I realized that this line of your code returns different matrices for different environments:

vinv = np.linalg.inv(np.matrix(gym.get_camera_view_matrix(sim, env, cam_handles[c])))

Shouldn't the camera view matrix be the same for all environments, given that the setups are identical? I believe this is the main cause of the issue.
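For reference, this is roughly the check I used to confirm it (just a sketch; names like envs and cam_handles come from my attached script and may differ from yours). It prints each environment's view matrix next to its env origin, and in my case the difference between matrices looks like the env spacing:

import numpy as np
from isaacgym import gymapi

def print_view_matrices(gym, sim, envs, cam_handles):
    # Print the view matrix of the corresponding camera in each env,
    # together with the env origin, to compare them across environments.
    for i, (env, cam) in enumerate(zip(envs, cam_handles)):
        view = np.matrix(gym.get_camera_view_matrix(sim, env, cam))
        origin = gym.get_env_origin(env)
        print("env %d origin: (%.2f, %.2f, %.2f)" % (i, origin.x, origin.y, origin.z))
        print(view)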

Please see below for my code, which creates two separate point clouds for two environments. Please advise me on a good way to generate point clouds in multiple environments.
test_point_cloud_multiple_scences.py (8.8 KB)

Thanks a lot,
Bradley

Thanks for the report @yuan.truyenbao, I'll look into this issue.

Hi @vmakoviychuk,

Thanks for your help! Did you have a chance to look into this issue and come up with a fix?

Have a great day,
Bradley

Hi, I’m responding to this thread because I have a similar issue. View matrices for cameras in different environments are different even when the environments are set up similarly. I think this may be because the view matrix returned by Gym.get_camera_view_matrix is in global space instead of env space.

Will this be fixed? I imagine it would be better in all use cases for the view matrix to be in env space. Ideally the coordinate space could be configured with a parameter, and at the very least the documentation should explain why the view matrices can differ between environments.
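In the meantime, the workaround I'm trying is to shift the deprojected points back by the env origin. This is only a sketch, under the assumption that the points produced with the inverse view matrix are in global/sim coordinates:

import numpy as np

def points_to_env_space(gym, env, points_world):
    # points_world: (N, 3) array of points obtained by deprojecting the depth
    # image with the inverse view matrix, i.e. expressed in global coordinates.
    # Subtracting the env origin moves them into that env's local frame, so
    # identically set up environments should then yield identical point clouds.
    origin = gym.get_env_origin(env)
    return points_world - np.array([origin.x, origin.y, origin.z])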

Thank you.