Poses, cameras and IMUs

I’m attempting to set up GMapping on my differentially steered robot.

I include the subgraph differential_base_gmapping and connect my lidar output to its lidar input, my wheel odometry base state to its base state input, and my ZED2 camera IMU to its raw_imu input. I change the lidar frame in the GMapping configuration to match the name of my lidar frame. I then run the app and attempt to map the office. It maps fine while moving straight ahead, but any rotation skews the map from then on, and it cannot recognize where it is even when returning to previously visited places.

What gives? I don’t think my wheel odometry is very good for rotation, but the ZED IMU should be great. I suspect the problem is that the IMU data is produced in the camera coordinate frame, whereas the odometry data is produced in the robot frame. On my robot these two frames are oriented quite differently, which I can partially confirm by observing the reported rotational velocity when spinning the robot in place: the IMU reports rotation about its y-axis, which corresponds to the robot's z-axis.
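To illustrate what I mean by the mismatch, here is a minimal sketch (plain Eigen, not Isaac-specific) of rotating the IMU's angular velocity from the camera frame into the robot frame. The extrinsic rotation below is hypothetical; it just maps the camera's y-axis onto the robot's z-axis, matching the behaviour I observe:

```cpp
// Sketch only: rotate an angular velocity measured in the camera frame into the
// robot frame, given the fixed robot_R_camera extrinsic rotation.
#include <cmath>
#include <iostream>
#include <Eigen/Dense>

int main() {
  // Hypothetical extrinsic: a +90 degree rotation about the x-axis maps the
  // camera +y axis onto the robot +z axis. Replace with your real extrinsics.
  const Eigen::Quaterniond robot_R_camera(
      Eigen::AngleAxisd(M_PI / 2.0, Eigen::Vector3d::UnitX()));

  // Angular velocity as the IMU reports it, expressed in the camera frame:
  // the robot spins in place, so the camera sees rotation about its y-axis.
  const Eigen::Vector3d omega_camera(0.0, 0.5, 0.0);  // rad/s

  // Rotate the vector into the robot frame.
  const Eigen::Vector3d omega_robot = robot_R_camera * omega_camera;

  std::cout << "omega in robot frame: " << omega_robot.transpose() << std::endl;
  // Expected: rotation about the robot z-axis, i.e. roughly (0, 0, 0.5).
  return 0;
}
```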

This is very problematic, and I cannot find a way to easily convert a pose from one coordinate frame into another. Perhaps a codelet would be useful here, and if none exists, some assistance in writing one would be great. Preferably I should not have to do this conversion at all, since the camera knows what it is attached to and how it relates to the robot. I also can't seem to find a way to view the values of poses in the pose tree, which would also be helpful (i.e. click two poses to see their relative transform).
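For reference, the math such a conversion codelet (or a pose-tree inspector) would need is just transform composition: given two poses expressed in a common parent frame a, the relative transform is b_T_c = (a_T_b)^-1 * a_T_c. A minimal Eigen sketch with made-up frame names and values:

```cpp
// Sketch only: compute the relative transform between two frames that are both
// known relative to a common parent frame ("world" here). All values are illustrative.
#include <cmath>
#include <iostream>
#include <Eigen/Dense>

// Illustrative helper: build a pose from a translation and a yaw about the z-axis.
Eigen::Isometry3d MakePose(const Eigen::Vector3d& translation, double yaw_rad) {
  Eigen::Isometry3d pose = Eigen::Isometry3d::Identity();
  pose.translate(translation);
  pose.rotate(Eigen::AngleAxisd(yaw_rad, Eigen::Vector3d::UnitZ()));
  return pose;
}

int main() {
  // Hypothetical poses of the robot base and the camera, both expressed in "world".
  const Eigen::Isometry3d world_T_robot  = MakePose({1.0, 2.0, 0.0}, M_PI / 4.0);
  const Eigen::Isometry3d world_T_camera = MakePose({1.1, 2.0, 0.3}, M_PI / 4.0);

  // Relative transform: where the camera sits in the robot frame.
  const Eigen::Isometry3d robot_T_camera = world_T_robot.inverse() * world_T_camera;

  std::cout << "camera in robot frame, translation: "
            << robot_T_camera.translation().transpose() << std::endl;
  return 0;
}
```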

Any lead on solving this would be great.