I’m working on an application that uses the Visual Odometry gem with a RealSense D435 for SLAM. Is there a way to either use the RealSense depth data with Cartographer, or use GMapping with the data from the RealSense to do this? So far I haven’t been able to find a solution that doesn’t require a Carter robot.
The RealsenseCamera codelet wraps the output of the camera and emits depth as a DepthCameraProto, which you can route via an edge to any component, including yours. The Cartographer codelet, however, assumes a planar LIDAR scan, not an arbitrary depth map.
The Visual Odometry gem (doc) works with a raw stereo pair (no depth channel).
Could you tell me a bit more about your use case? You want to use a RealSense camera for mapping and navigation instead of a LiDAR basically?
Yes, that’s exactly what I’m trying to do. I’m trying to work with the DepthImageFlattening component to convert the depth to a flatscan, and then pipe that (as well as the odometry data from VO gem) into cartographer. I have no experience with cartographer, so is that the right approach?
@snimmagadda for more insight. I have not tried it myself yet, but it seems like it could work in theory. Intel claims that the RealSense cameras can be used with Cartographer in this manner, but I have not seen a working example yet.
The Cartographer codelet passes the 2D flatscan to Cartographer and queries the pose “odom_T_lidar” as an odometry prior, a best guess for how to register each 2D flatscan and stitch it together with the others. The DepthImageFlattening component synthesizes laser scan angles from the depth image as if a laser had been used to generate it, and with the right parameters you could probably produce a reasonable 2D flatscan. Using the VO component to produce odometry good enough (smooth, at least) for Cartographer may be possible.
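I haven’t looked at DepthImageFlattening’s internals, but the geometry it has to perform can be sketched roughly like this, assuming a pinhole camera model. The function name and the D435-like intrinsics below are illustrative assumptions, not taken from the SDK:

```python
import math

def depth_row_to_flatscan(depth_row, fx, cx):
    """Convert one horizontal row of a depth image into (angle, range) beams,
    approximating what a planar LIDAR at the camera origin would measure.

    depth_row: per-pixel depth along the camera z axis (meters)
    fx, cx: horizontal focal length and principal point (pixels)
    """
    beams = []
    for u, z in enumerate(depth_row):
        if z <= 0.0:
            continue  # skip invalid depth readings
        angle = math.atan2(u - cx, fx)  # ray angle from the optical axis
        rng = z / math.cos(angle)       # Euclidean range along that ray
        beams.append((angle, rng))
    return beams

# Hypothetical D435-like intrinsics: 640 px wide, fx ~ 385, cx ~ 320.
scan = depth_row_to_flatscan([2.0] * 640, fx=385.0, cx=320.0)
```

The real component presumably aggregates a band of rows (e.g. taking the minimum range per angle) rather than a single row, which is where its parameters would matter.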
Both of these components will be extrapolating input data as if it came from a real LiDAR and IMU for Cartographer, which was itself not designed for VSLAM. This will reduce the quality of what Cartographer can stitch together as a map, perhaps unacceptably so.
If you or anyone else does get this to work, it would be great to post how you did it.
I have found a solution to this; however, I unfortunately have to abandon this project for the time being. The code is in a rudimentary state, is not maintained, and the Cartographer parameter values are not tuned. But I’ve attached the repo that contains applications for GMapping and Cartographer using only the RealSense camera with no external odometry.
Using visual odometry (which gives the camera pose), I derive the odometry data of the camera and inject it into the SLAM module. The LIDAR data is flattened from the depth frame. If anyone else uses this, feel free to reach out with any questions.
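For anyone curious about the odometry-injection step: the real code is in the repo, but a minimal sketch of projecting a VO camera pose onto the ground plane might look like the following. The function name and the assumption that the pose’s x/y axes span the ground plane are mine, not taken from the repo:

```python
import math

def pose_to_planar_odometry(pose):
    """Reduce a 3D camera pose (4x4 row-major homogeneous matrix) to the
    (x, y, heading) odometry a 2D SLAM module expects.

    Assumes the pose's x/y axes span the ground plane; a real pipeline
    would first transform the optical frame into a robot/world frame.
    """
    x, y = pose[0][3], pose[1][3]                  # planar translation
    heading = math.atan2(pose[1][0], pose[0][0])   # yaw from the rotation's first column
    return x, y, heading

# Example: a pose rotated 90 degrees about z, translated to (1, 2).
x, y, heading = pose_to_planar_odometry(
    [[0.0, -1.0, 0.0, 1.0],
     [1.0,  0.0, 0.0, 2.0],
     [0.0,  0.0, 1.0, 0.0],
     [0.0,  0.0, 0.0, 1.0]])
```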