AprilTag pose in relation to the robot pose

Hi,

We are trying to translate the pose of an identified AprilTag into the robot's pose, but the readings seem to be inaccurate, with a lot of deviation in x, y, and rotation. What is the best way to get the AprilTag pose in relation to the robot pose?


perception::AprilTagsDetection should provide FiducialListProto, which contains a list of AprilTag detections including camera_T_tag (please check the messages/ folder for the definition of FiducialListProto).

You then need robot_T_camera to do:

robot_T_tag = robot_T_camera * camera_T_tag

robot_T_camera is robot configuration specific. Please check what the camera frame looks like at https://docs.nvidia.com/isaac/isaac/packages/perception/doc/coord_frame.html#camera-coordinate-frame

I understand the translation is robot_T_tag = robot_T_camera * camera_T_tag; I think everyone knows this. There are many math files with helpers in Isaac. Can you please give an actual example, so the next person that reads this doesn't struggle with the same issue? You can even use Kaya as an example.

  1. Config of the camera pose in relation to the robot (this is Kaya's):
    "2d_kaya.kaya_hardware.camera": {
      "realsense_pose": {
        "lhs_frame": "robot",
        "rhs_frame": "camera",
        "pose": [0.406, -0.579, 0.579, -0.406, 0, 0, 0.22]
      }
    },

  2. An actual code example of taking a tag pose and translating it to the robot pose using the Isaac math libraries, and how you handle the differences in frame orientation.

Thank you

Also, another question on this as I'm looking at it: does the pose initialization for robot_T_camera handle the differences in frame orientation?

Perhaps I’m not getting the issue, but here are the steps:

  1. Read camera_T_tag from message (as explained in https://docs.nvidia.com/isaac/isaac/doc/engine/components.html#receiving-messages):
    auto fiducials_proto = rx_fiducials().getProto().getFiducialList();
    // Assuming fiducials_proto.size() > 0
    auto fiducial = fiducials_proto[0];
    Pose3d camera_T_tag = fiducial.getCameraTTag();

  2. Assuming it is set using PoseInitializer (like the config you shared), read robot_T_camera from pose tree (as explained in https://docs.nvidia.com/isaac/isaac/doc/engine/components.html#poses):
    const Pose3d robot_T_camera = get_robot_T_camera(getTickTime());

  3. robot_T_tag = robot_T_camera * camera_T_tag

robot_T_camera should be set considering differences in frame orientation. Check the two PoseInitializer components in apps/samples/follow_me/follow_me.app.json. lidar_initializer is set to have no rotation, but zed_left_camera_initializer handles the differences in frame orientation.
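For reference, here is a minimal sketch of what a codelet declaration behind those steps could look like. The hook macros (ISAAC_PROTO_RX, ISAAC_POSE3), class name, and include paths are assumptions written from memory of the codelet API described in the components documentation linked above, so double-check them there:

    #include "engine/alice/alice_codelet.hpp"
    #include "messages/fiducial_list.capnp.h"

    namespace isaac {

    // Hypothetical codelet; the class and channel names are only illustrative.
    class TagInRobotFrame : public alice::Codelet {
     public:
      void start() override { tickOnMessage(rx_fiducials()); }
      void tick() override;

      // Generates the rx_fiducials() accessor used in step 1 above.
      ISAAC_PROTO_RX(FiducialListProto, fiducials);
      // Generates get_robot_T_camera(time) used in step 2; "robot" and "camera"
      // must match the lhs_frame/rhs_frame set by the PoseInitializer.
      ISAAC_POSE3(robot, camera);
    };

    }  // namespace isaac

    ISAAC_ALICE_REGISTER_CODELET(isaac::TagInRobotFrame);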

So here is likely a dumb follow-up question, but I just need some clarification. When doing robot_T_camera * camera_T_tag, the following error occurs:

no operator “*” matches these operands – operand types are: const isaac::Pose3d * Pose3dProto::Reader

My assumption is camera_T_tag needs to be converted to Pose3d. Is there something I'm missing here? Should this work outright as outlined above?

My camera has a 30-degree pitch (based on x-axis forward and y-axis to the right), with x, y, and z offsets. I set up my pose initializer as follows:

"camera_pose_initializer": {
  "camera_pose": {
    "lhs_frame": "robot",
    "rhs_frame": "camera",
    "pose": [0.9660, 0.0, 0.0, 0.2588, 0.216, 0.0025, 0.692]
  }
}

qw and qz should be my pitch converted to a quaternion, with heading and bank = 0. I appreciate all feedback.
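For reference, here is roughly how I computed those two numbers, assuming the pose array layout is [qw, qx, qy, qz, x, y, z] and using plain Eigen (which the Isaac math types wrap):

    #include <cmath>
    #include <cstdio>

    #include <Eigen/Geometry>

    int main() {
      // 30 degree rotation about a single axis (here the z axis, which is what
      // fills in the qw and qz slots above).
      const double angle = 30.0 * M_PI / 180.0;
      const Eigen::Quaterniond q(Eigen::AngleAxisd(angle, Eigen::Vector3d::UnitZ()));
      // For a single-axis rotation: w = cos(angle / 2), axis component = sin(angle / 2).
      std::printf("qw=%.4f qx=%.4f qy=%.4f qz=%.4f\n", q.w(), q.x(), q.y(), q.z());
      // Prints approximately: qw=0.9659 qx=0.0000 qy=0.0000 qz=0.2588
      return 0;
    }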

The error indicates that your camera_T_tag variable is of type Pose3dProto::Reader. You can get a Pose3d variable using:

const Pose3d actual_camera_T_tag = FromProto(your_camera_T_tag);

Please have a look at messages/math.hpp for the implementation of

inline Pose3d FromProto(::Pose3dProto::Reader reader);
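Putting the pieces of this thread together, the tick() of the hypothetical TagInRobotFrame codelet sketched earlier could look roughly like this (a sketch only; the hook names rx_fiducials() and get_robot_T_camera() are assumed to be generated as described in the earlier steps):

    #include "engine/alice/alice_codelet.hpp"
    #include "engine/core/logger.hpp"
    #include "messages/math.hpp"
    // In a real app you would also include the header that declares the
    // hypothetical TagInRobotFrame codelet sketched earlier in the thread.

    namespace isaac {

    void TagInRobotFrame::tick() {
      // Step 1: read camera_T_tag from the incoming FiducialListProto.
      auto fiducials_proto = rx_fiducials().getProto().getFiducialList();
      if (fiducials_proto.size() == 0) {
        return;
      }
      auto fiducial = fiducials_proto[0];
      const Pose3d camera_T_tag = FromProto(fiducial.getCameraTTag());

      // Step 2: read robot_T_camera from the pose tree (set by the PoseInitializer).
      const Pose3d robot_T_camera = get_robot_T_camera(getTickTime());

      // Step 3: compose the transforms to express the tag in the robot frame.
      const Pose3d robot_T_tag = robot_T_camera * camera_T_tag;
      LOG_INFO("tag in robot frame: x=%f y=%f z=%f", robot_T_tag.translation.x(),
               robot_T_tag.translation.y(), robot_T_tag.translation.z());
    }

    }  // namespace isaac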

Exactly what I needed. Thanks.

Going a little off of what Fedora labs was seeing, I believe I'm seeing similar “inconsistencies”. This may be due to me setting up the data wrong. Going off of the config above, my camera is rotated 30 degrees toward the ground, looking out the front of the robot.

Just to check how I understand the orientation, looking from behind the robot forward:
the robot frame is x = forward, y = right, and z = down, and
from what I understood from the docs, the camera frame is x = right, y = up, and z is inward (toward the back of the robot). Is the robot_T_camera pose initializer set up in the camera frame or the robot frame?

Also, robot_T_tag returns values from what appears to be the tag's point of view. For example, I expect a fiducial to be 2 m from the robot (no deviation side to side or in height), which would be +2 m on the robot's x-axis (2, 0, 0), but I get ~+2 m in the z value of robot_T_tag (0, 0, 2). Do I just need to apply a couple of 90-degree rotations about the axes?

Also, if I leave the robot in place and rotate left or right, z stays pretty much the same, but both x and y change pretty significantly. I would expect neither to change, just my orientation. My only thought is that I applied the camera's 30-degree pitch incorrectly in my pose initializer. I appreciate any insight into any incorrect interpretations.

Please see https://docs.nvidia.com/isaac/isaac/packages/perception/doc/coord_frame.html for the frames. In particular, y is left, not right for the robot and y is down, not up for the camera.

Please see https://docs.nvidia.com/isaac/isaac/doc/component_api.html#isaac-alice-poseinitializer for the pose initializer. If lhs_frame is “a” and rhs_frame is “b”, you are setting a_T_b.

Have a look at messages/fiducial_list.capnp. AprilTagsDetection outputs poses in the camera frame (hence the name “cameraTTag”). So (x, y, z) = (0, 0, 2) would correspond to 2 meters away from the camera.
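As a quick numerical sanity check, rotating a camera-frame point (0, 0, 2) by the Kaya robot_T_camera pose quoted earlier in this thread shows camera z mapping mostly onto robot x (with that mount's tilt showing up in z). Plain Eigen is enough for this, assuming the pose array layout [qw, qx, qy, qz, x, y, z]:

    #include <cstdio>

    #include <Eigen/Geometry>

    int main() {
      // robot_T_camera rotation and translation from the Kaya config quoted above,
      // assuming the array layout [qw, qx, qy, qz, x, y, z].
      const Eigen::Quaterniond robot_q_camera(0.406, -0.579, 0.579, -0.406);  // (w, x, y, z)
      const Eigen::Vector3d robot_p_camera(0.0, 0.0, 0.22);
      // A tag 2 meters straight in front of the camera, i.e. along camera z.
      const Eigen::Vector3d camera_p_tag(0.0, 0.0, 2.0);
      const Eigen::Vector3d robot_p_tag =
          robot_q_camera.normalized() * camera_p_tag + robot_p_camera;
      // Prints roughly (1.88, 0.00, -0.46): mostly forward along robot x, with the
      // camera's downward tilt accounting for the rest.
      std::printf("tag in robot frame: %.2f %.2f %.2f\n",
                  robot_p_tag.x(), robot_p_tag.y(), robot_p_tag.z());
      return 0;
    }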