ORB_SLAM2 CUDA enhanced running on a TX2

Been bashing on this since my TX2 came in. It's based on ORB_SLAM2 with the GPU enhancements by yunchih. Since it uses ROS for input/output, I had to swap out the ROS OpenCV packages for CUDA-enabled ones. Now that is a job, and hard to figure out, since ROS uses a pseudo file system to place those libs at runtime and it disappears if the program throws an error. However, there is a way to get it done with a little trickery. Once you have a CUDA-enabled OpenCV 3.1.0 and have swapped out the ROS OpenCV libs, the build is pretty straightforward. There are a few other deps: Pangolin, libblas, and liblapack. I want to get it to use cuBLAS instead of libblas, which should speed it up a bit more. With the ZED there are other enhancements you can make. Since the ZED already delivers rectified images and you only need left and right for ORB_SLAM2, the rest can be culled from the ros_wrapper to save some cycles. This also allows you to turn off rectification in ORB_SLAM2 to speed things up. However, as you can see in this vid, the TX2 in mode 0 is quite capable.

ORB_SLAM2_GPU - YouTube
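For anyone attempting the same swap, a minimal sanity-check sketch (the paths assume ROS Kinetic and an OpenCV installed to /usr/local as described later in the thread; adjust to your setup):

  # Which OpenCV libraries does the ROS side actually load at runtime?
  ldd /opt/ros/kinetic/lib/libcv_bridge.so | grep opencv
  # Does the OpenCV build you intend to use have CUDA compiled in?
  # (getBuildInformation() prints an "NVIDIA CUDA" line when it does; this checks
  # whichever cv2 module Python finds first, so make sure that is your build.)
  python -c "import cv2; print(cv2.getBuildInformation())" | grep -i cuda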

Hi Dan,
Thank you for sharing your explorations.
I was struggling to get SLAM to work on a TK1, and it was nothing but issues. I somehow, hopefully, finally resolved them, but it only ran for a moment and then hung forever.
I will now try to reach the same objective with a ZED and a TX2.
Could you provide more instructions on how to get the thing to work?
As I understand, one key point is to get ROS to use CUDA-enabled OpenCV packages, and the required OpenCV version seems to be 3.1.0. But what changes to the ROS OpenCV libs are required?
Could you share a raw disk image of a TX2 where the environment is configured as you described above?
e.g. via dd if=/dev/mmcblk0 | ssh user@jetsonaddress.com 'dd of=/dev/mmcblk0'
or via an FTP upload, or by dd-ing to a file and uploading that
Thanks
Andrey

The ROS replacement part is covered on the one website I could find, and it's in Japanese. The translation isn't that good, but if you hover over the command lines it takes all the garbage out: "Building a GPGPU environment for robots with OpenCV3 x CUDA 8.0RC on ROS Kinetic" - Qiita. I used this page to build OpenCV with CUDA enhancements: OpenCV: Building OpenCV for Tegra with CUDA. Be sure to use the correct cmake commands; there are 3 sets of them. Also do it on 3.2: the source you will get for the ROS part is 3.2 now, and ORB_SLAM2 will work with 3.2, just change the CMakeLists.txt file to the 3.2 version. There is one file in the ORB_SLAM2 package that needs to be patched. In Thirdparty/g2o/g2o/solvers you need to edit linear_solver_eigen.h, line 56, from this

typedef Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic, SparseMatrix::Index> PermutationMatrix;

to this

typedef Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic, int> PermutationMatrix;
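If you prefer to script that patch, a one-liner along these lines should work (a sketch; the line number and path assume the stock ORB_SLAM2 layout, so check your checkout first):

  # Replace SparseMatrix::Index with int on line 56 only, as described above
  sed -i '56s/SparseMatrix::Index/int/' Thirdparty/g2o/g2o/solvers/linear_solver_eigen.h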

Be sure to add the package path to ROS_PACKAGE_PATH or it will never build. And don't include ORB_SLAM2 at the end; leave the path ending at ROS. You will need to get the source for Pangolin and build that. I start with a fresh flash and tell it not to install OpenCV4Tegra; you can always install it later if you need it. Set up OpenCV to install in /usr/local in case you do need to install the NVIDIA version later. libblas-dev and liblapack-dev will need to be installed before trying to build ORB_SLAM2. JetsonHacks.com covers the OpenCV part, with a nice set of instructions and scripts to build it.
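A minimal sketch of that environment variable (the clone location under the home directory is an assumption; adjust to wherever you put the source):

  # Note the path ends at Examples/ROS, not at .../ROS/ORB_SLAM2
  export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:${HOME}/ORB-SLAM2-GPU2016-final/Examples/ROS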

You don’t need to do the patches that page lists for OpenCV if you use 3.2; just change the checkout commands to -b v3.2.0 3.2.0. It builds on the TX1 and TX2. The build of OpenCV 3 for ROS takes the longest. Be sure to change the CUDA architecture to 6.3 if it's a TX2 or it won't use the GPU as effectively. You can use the build.sh script in the ORB_SLAM2 directory to build it, but the GPU version doesn't have the build script for the ROS part, so you will have to get that from the original git page. The source for the GPU version is here: https://github.com/yunchih/ORB-SLAM2-GPU2016-final. That one was hard to find; I saw a video of it running but it didn't give any further information. Just for grins I searched YouTube for it and the link was in the video info. It really increases the performance, although there are still lots of areas that could use enhancement but don't really take to being enhanced very well. Disparity maps are like that: you can use a GPU, but other methods don't work with them. The ZED does disparity maps on the TX2 with JetPack 3 at 42 Hz, which is decent, and that's with 720p/60 input, not 640x480. Funny though, when you put it in mode 0 the frame rate for disparity maps actually drops by about 4 fps. I'm thinking that at full speed the Pascal is running out of data to munch, even with the Denver2 cores in there to feed it. A beast.
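A condensed sketch of the checkout and configure steps (this is not the full flag set from the Tegra build page, which should be treated as authoritative; 6.2 is used here because that is the TX2's actual compute capability, as the next reply points out):

  cd opencv
  git checkout -b v3.2.0 3.2.0          # build the 3.2.0 tag instead of 3.1.0
  mkdir build && cd build
  cmake -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=/usr/local \
        -DWITH_CUDA=ON \
        -DCUDA_ARCH_BIN="6.2" \
        -DCUDA_ARCH_PTX="" \
        -DBUILD_TESTS=OFF \
        -DBUILD_PERF_TESTS=OFF ..
  make -j4 && sudo make install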

@danpollock I think you meant to say ‘cuda architecture 6.2’

@Dan
Thank you for your response.
It seems it will take a while; the OpenCV version to use is 3.2, and the CUDA architecture should be set to 6.2. There are also some peculiarities in building the libraries and adding a path, as I understand.

Changing the checkout commands means the git checkout commands, as I understand.

Thank you for sharing the link. I will try to build it following your advice.

Let me know if there is a chance to download a raw disk copy from you.

Thanks,
Andrey

I will try to write the steps here.
The goal: install the software to run ORB_SLAM2 on a TX2.
The environment: a remotely accessible TX2 with a ZED camera; fresh install, no OpenCV, no CUDA installed.

I) Preparations:

  1. mkdir tx2
  2. cd tx2
  3. git clone https://github.com/stevenlovegrove/Pangolin.git (build sketch below)
  4. git clone https://github.com/opencv/opencv.git
  5. git clone https://github.com/yunchih/ORB-SLAM2-GPU2016-final.git

to be continued
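A plausible continuation for the Pangolin dependency from step 3, sketched with standard-build assumptions (the tx2 directory under home matches the steps above; the apt line covers GLEW, which Pangolin needs, plus the BLAS/LAPACK deps mentioned earlier in the thread):

  sudo apt-get install -y libglew-dev libblas-dev liblapack-dev   # GLEW for Pangolin, BLAS/LAPACK for ORB_SLAM2
  cd ~/tx2/Pangolin
  mkdir build && cd build
  cmake ..
  make -j4
  sudo make install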
II) The Jetson TX2 was re-flashed, as it was discovered that it had previously been flashed in the wrong way
to be continued

Hi everyone. I know it's a little bit late, but thanks to Dan's instructions I managed to get ORB-SLAM2 with GPU enhancements up and running on a TX1, tested with a monocular camera: https://www.youtube.com/playlist?list=PLde9NsDtSVwZNb_pPyKm5eOPk86x_9Yk1
I published my project here: https://github.com/thien94/ORB_SLAM2_CUDA (ORB_SLAM2 with GPU enhancement running on the NVIDIA Jetson TX1, focused on the ROS part), including the steps to build it and how to solve a few issues, so that anyone with the same interest can test it out without going through all the trouble.
I also added ROS publishers for tf, pose, the current camera frame, and the point cloud. All are included in the repo.
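For anyone following along, a rough sketch of building that repo, assuming it keeps the stock ORB_SLAM2 layout with build.sh and build_ros.sh scripts (check the repo's README for the authoritative steps):

  git clone https://github.com/thien94/ORB_SLAM2_CUDA.git
  cd ORB_SLAM2_CUDA
  chmod +x build.sh build_ros.sh
  ./build.sh        # Thirdparty libs, the core library and the examples
  ./build_ros.sh    # ROS nodes; ROS_PACKAGE_PATH must include Examples/ROS as discussed above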

Thanks Thien. I've moved on to other things. DNN is my world now.

I just built Thien's GitHub project and integrated it into the rest of my rover software. It's sometimes a little slow to initialize. I found that you need to make sure your camera is level when you mount it; inits are much faster if it's nice and level.

Man, that changeover to CUDA-enhanced OpenCV in ROS is tricky. I've done it a few times and it always seems to trip me up. And the versions of OpenCV: ORB_SLAM seems to like 3.1 with CUDA, but the ROS version is up to 3.3.1 now. It doesn't seem to be an issue, but I tried 3.2 to get closer to the ROS version and it didn't want to work for me, so I went back to 3.1. Use this page for building OpenCV with CUDA on the Tegra, it works great: OpenCV: Building OpenCV for Tegra with CUDA
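A quick way to compare the versions in play (a sketch; the package name assumes ROS Kinetic installed from the standard apt repos):

  dpkg -l | grep ros-kinetic-opencv3     # OpenCV version the ROS packages were built against
  pkg-config --modversion opencv         # version of the locally built OpenCV, if it installed a .pc file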

One issue I found: the CMakeLists.txt in the Examples/ROS/ORB_SLAM2_CUDA directory needs to be changed so the ROS programs link against the CUDA-enhanced lib. On line 60, add _CUDA to the library name.
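A sketch of locating that entry (the library name and the exact line number here are placeholders that depend on the repo revision):

  cd ORB_SLAM2_CUDA/Examples/ROS/ORB_SLAM2_CUDA
  grep -n "libORB_SLAM2" CMakeLists.txt   # find the library entry (line 60 in this revision)
  # then append _CUDA to that library name, e.g. libORB_SLAM2.so -> libORB_SLAM2_CUDA.so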

Hi,

We haven't found incompatibility issues among OpenCV 3.x versions.
Most of the APIs are identical and shared. You can upgrade an application to OpenCV 3.3 with little effort.

Is your issue fixed after the modification in CMakeLists?
Thanks.