cuVSLAM system and robot position prediction accuracy

I have been exploring your cuVSLAM system and noticed its use of keypoints for robot position prediction, which seems quite similar to the methodology used in ORB-SLAM3. ORB-SLAM3 offers a configuration option, `ORBextractor.nFeatures`, for increasing the number of features extracted per image, thereby improving the algorithm's accuracy in dynamic environments (see ORB_SLAM3/Examples/Stereo-Inertial/RealSense_D435i.yaml in the UZ-SLAMLab/ORB_SLAM3 repository on GitHub).
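For reference, the ORB extractor section of that kind of config file looks roughly like this (values are illustrative; see the linked file for the actual ones):

```yaml
# ORB extractor settings from an ORB-SLAM3 example config (illustrative values)
ORBextractor.nFeatures: 1250   # number of features extracted per image
ORBextractor.scaleFactor: 1.2  # scale factor between pyramid levels
ORBextractor.nLevels: 8        # number of levels in the scale pyramid
ORBextractor.iniThFAST: 20     # initial FAST corner threshold
ORBextractor.minThFAST: 7      # fallback FAST threshold if too few corners
```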

Question 1: Is there a similar option in cuVSLAM? I could not find one in the API list.

Hi @Ettikan

cuVSLAM and ORB-SLAM3 are two different SLAM algorithms. cuVSLAM is designed to work with stereo cameras, while ORB-SLAM3 supports monocular, stereo, and RGB-D configurations. In cuVSLAM there is no option to adjust the number of features extracted. However, you can find detailed information about the feature extraction algorithm used in cuVSLAM in our documentation: cuVSLAM — isaac_ros_docs documentation.
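To illustrate the difference, cuVSLAM is configured through ROS parameters on the Isaac ROS Visual SLAM node rather than through a feature-extractor config file. A minimal sketch of such a parameter file is shown below; the parameter names are illustrative assumptions and may differ by release, so please check the documentation linked above. Note the absence of any `ORBextractor.nFeatures`-style knob.

```yaml
# Hypothetical ROS 2 parameter file for the Isaac ROS Visual SLAM node.
# Parameter names are illustrative assumptions, not a definitive API listing;
# there is no parameter controlling the number of extracted features.
visual_slam_node:
  ros__parameters:
    denoise_input_images: false      # optional pre-filtering of input frames
    rectified_images: true           # inputs are already stereo-rectified
    enable_slam_visualization: true  # publish visualization topics
    base_frame: "base_link"          # robot body frame for reported poses
```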
