Topic Description
I am trying to publish the 3D pose of an object relative to the camera frame. I’ve tried using the “Isaac Real World Pose” OmniGraph node, but it is unclear to me how to extract and process the resulting pose. I see nodes for mathematical operations such as invert matrix and multiply matrix, but I don’t understand how to take the output of the get-pose node for both the camera and object prims and then do the matrix multiplication that gives the object pose relative to the camera.
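To state what I am after in matrix terms (my own notation, column-vector convention), with T_obj→world the object’s world pose and T_cam→world the camera’s world pose as 4x4 homogeneous transforms:

T_obj→cam = (T_cam→world)⁻¹ · T_obj→world

which is the composition I assume the invert and multiply nodes are meant to implement.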
I took a look at the link, but it is not exactly what I am looking for. I want to find the pose of an object relative to the camera frame. What I’ve tried is:
Get the world pose of the object
Get the world pose of the camera
Find the inverse of the camera’s world pose
Multiply the inverse of the camera’s world pose by the object’s world pose
This ends up giving me translation values that do not make any sense. Before inverting and multiplying, both matrices look correct: their translation components match the prims’ positions in the world frame. Here is my code:
import numpy as np
import omni.usd
from pxr import Usd, UsdGeom

stage = omni.usd.get_context().get_stage()

object_prim = stage.GetPrimAtPath("/World/ParentA/ObjectA")
# 4x4 transform (USD row-vector convention) from the object's frame to the world frame
obj_world_transform_matrix = UsdGeom.Xformable(object_prim).ComputeLocalToWorldTransform(Usd.TimeCode.Default())

camera_prim = stage.GetPrimAtPath("/World/Robot/Camera1")
# 4x4 transform (USD row-vector convention) from the camera's frame to the world frame
camera_world_transform_matrix = UsdGeom.Xformable(camera_prim).ComputeLocalToWorldTransform(Usd.TimeCode.Default())

# Transpose from USD's row-vector convention to the column-vector convention
obj_to_world = np.transpose(np.array(obj_world_transform_matrix))
camera_to_world = np.transpose(np.array(camera_world_transform_matrix))

# world -> camera
camera_to_world_inv = np.linalg.inv(camera_to_world)

# Intended result: the object's pose expressed in the camera frame
obj_to_camera = camera_to_world_inv * obj_to_world
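One thing I noticed while writing this up: '*' on numpy ndarrays multiplies element-wise, so a true matrix product would need np.matmul or the @ operator. Below is a minimal sketch of what I think the composition should look like when done entirely with Gf instead (same prim paths as above, assumed). Since Gf.Matrix4d uses the row-vector convention, the order is obj_to_world * world_to_camera, i.e. reversed compared to the column-vector form above. Is this the right way to get the object pose relative to the camera?

from pxr import Usd, UsdGeom
import omni.usd

stage = omni.usd.get_context().get_stage()

# Same prim paths as in the snippet above
obj_xf = UsdGeom.Xformable(stage.GetPrimAtPath("/World/ParentA/ObjectA"))
cam_xf = UsdGeom.Xformable(stage.GetPrimAtPath("/World/Robot/Camera1"))

obj_to_world = obj_xf.ComputeLocalToWorldTransform(Usd.TimeCode.Default())
cam_to_world = cam_xf.ComputeLocalToWorldTransform(Usd.TimeCode.Default())

# Gf.Matrix4d uses the row-vector convention (p_world = p_obj * obj_to_world),
# so composition reads left to right: object -> world -> camera
obj_to_camera = obj_to_world * cam_to_world.GetInverse()

translation = obj_to_camera.ExtractTranslation()    # Gf.Vec3d, object position in the camera frame
rotation = obj_to_camera.ExtractRotationQuat()      # Gf.Quatd, object orientation in the camera frame
print(translation, rotation)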