Has anyone successfully created RGBD maps with RTABMAP and kinect2_bridge?

Hi

I am trying to create RGBD maps on my mobile robot using a Kinect v2, kinect2_bridge and rtabmap, but have been running into issues for weeks.

The mapping does work, but I am having issues with RGB/depth image synchronisation. I have been slowly troubleshooting with matlabbe (the rtabmap author), but the issue remains.

kinect2_bridge is built from the standard GitHub repo. I know JetsonHacks had a version for the TX1, but I was under the impression this is no longer needed and the standard repo is fine. Is this the case?

Secondly, I know that the TX2 does not support OpenCL. I have built libfreenect2 with CUDA and I am launching kinect2_bridge with CUDA for depth processing. However, since the OpenCL it would normally look for is unavailable, depth registration of course falls back to the CPU.
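
For reference, this is roughly how I am launching it (a sketch assuming the standard iai_kinect2 launch file and its depth_method / reg_method arguments):

# CUDA for depth processing, CPU for depth registration (no OpenCL on the TX2)
roslaunch kinect2_bridge kinect2_bridge.launch depth_method:=cuda reg_method:=cpu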

I also have a lidar and odometry, which are recommended for more accurate maps.

Has anyone actually been able to get the TX2 to produce accurate RGBD maps on a mobile robot? Does it have the processing power to work with the depth clouds, or do a few tweaks need to be made?

Hi,

Have you tried ROS?
RTABMAP should be working with ROS on Jetson: [url]https://github.com/introlab/rtabmap_ros[/url]

Thanks.

Hi,

Yes, I am using ROS and am able to generate maps. The problem is that the depth/RGB synchronisation is poor, and despite working with the author of rtabmap to improve it from the software side, we have been unable to do so.
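
For reference, this is roughly the launch I have been testing (topic names assume the default kinect2_bridge namespaces; approx_sync and queue_size are the synchronisation parameters we have been tuning):

roslaunch rtabmap_ros rtabmap.launch \
    rgb_topic:=/kinect2/qhd/image_color_rect \
    depth_topic:=/kinect2/qhd/image_depth_rect \
    camera_info_topic:=/kinect2/qhd/camera_info \
    approx_sync:=true queue_size:=30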

A theory is that the TX2 does not have the processing power to produce clouds fast enough, and this seems to be borne out by the very laggy, distorted output you can see when just using the kinect2_bridge viewer and moving the Kinect. It publishes at around 2 Hz on the TX2, whereas a PC would be up around 30 Hz. I know the TX2 isn't meant to be able to do what a PC can, of course.
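
For anyone wanting to compare, those rates were measured with rostopic hz on the cloud and image topics, e.g.:

rostopic hz /kinect2/qhd/points
rostopic hz /kinect2/qhd/image_color_rect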

I am already using ./jetson_clocks.sh and CUDA; however, I realise that the TX2 does not support OpenCL. I just wanted to know if anyone has found a way to get more performance out of kinect2_bridge on the TX2?

Hi,

Our partner can reach real-time performance for RTAB-Map with the TaraXL USB stereo camera.

In general, the computation for stereo should be heavier than for an RGB-D camera, so there should still be some room for optimization.

Sorry that we don't have any other material to share with you.
Maybe other users can comment on this.

Thanks.

Interesting, is that with a hand-held camera? I have my Kinect v2 mounted on a robot with lidar and odometry, which is supposed to produce the best possible quality of map! Unfortunately it is not working out that way.

Are you using visual odometry in your mapping?

Thanks

Looks like it was the board speed after all. The TX2 was only able to publish the rectified RGBD clouds at 4 Hz, which just wasn't enough for rtabmap to make decent maps.

I switched to my laptop and it's putting out 30 Hz with the same settings, and the maps are way better. Looks like kinect2_bridge and qhd clouds are just a stretch for the TX2.

Cheers for helping

Hi,

Just in case you don't already know this.

Have you maximized the device performance first?

sudo jetson_clocks.sh

Thanks.

Hi AastaLLL

Yes, I use nvpmodel mode 0 and also jetson_clocks. It was enough to double the publishing frequency from 2-3 Hz to about 5-6 Hz, but sadly not enough for good mapping.
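
For completeness, the exact commands I run after boot (jetson_clocks.sh sits in the home directory on JetPack 3.x):

sudo nvpmodel -m 0        # MAX-N power mode: all CPU cores on, maximum clock limits
sudo ./jetson_clocks.sh   # lock the clocks at their maximum
sudo nvpmodel -q          # confirm the active power mode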

SD clouds do publish much faster, but there is something about the way compression is handled in the ROS throttling node for SD clouds that causes lots of noise and "black" points, so qhd is the only option.
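
For anyone reproducing this, I am not certain it is the exact node I used, but a generic per-topic throttle can be set up with topic_tools (the output topic name here is just illustrative):

rosrun topic_tools throttle messages /kinect2/qhd/points 5.0 /kinect2/qhd/points_throttled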

Another issue is that kinect2_bridge runs depth registration best with OpenCL (depth processing is handled via CUDA, which works fine), so with OpenCL eliminated as an option the CPU load is probably the real killer.

Thank you for trying to help!

Hey guys!
I’m using the Jetson TX2 with the following configuration:
L4T 28.2.1 [ JetPack 3.3 or 3.2.1 ]
Board: t186ref
Ubuntu 16.04.6 LTS
Kernel Version: 4.4.38-tegra
CUDA 9.0.252

And I’d like to connect it to a Kinect v2; however, I haven’t had success. I’ve done everything I read on the forums here, on GitHub and on JetsonHacks, and even tried installing it as a Kinect v1, but nothing. In the end I read that it’s not possible to connect a default board (TX2) to this device. Is that true?

By the way, which camera is better for developing SLAM applications? The ZED?

Thanks in advance.

Hi @leviresende

I am not sure what you mean by board, but I had success connecting a Kinect v2 to the regular TX2 that you buy from NVIDIA. I used kinect2_bridge to generate the depth clouds, which relies on libfreenect2, and that in turn requires the standard CUDA package installed with JetPack. Unfortunately you cannot use OpenCL for depth registration, so you will need CPU registration; CUDA can still run the depth processing, however.
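
A quick way to sanity-check the libfreenect2 build before involving ROS is the bundled Protonect test program (the path assumes a build directory inside the libfreenect2 checkout):

cd ~/libfreenect2/build
./bin/Protonect cuda    # test the CUDA depth pipeline
./bin/Protonect cpu     # fallback, to confirm the camera itself works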

I have heard that the ZED has advantages in outdoor environments. From my experience with the Kinect v2, I would to some degree be more inclined to use the v1, as there is less overhead and it seems to produce better quality clouds. I am assuming this is because it uses a different means of obtaining the depth image (structured light).

Some people have had success with the nvidia depth cams, but I found they had too much noise for my application. There is a wide range of cameras available now; if only I had the money to try them all!

Hey @RoboRoss

First of all, I’m sorry about the term “board”, I misused it … I meant the regular TX2.
Got it! And I had done that, but before installing the CUDA version I’m currently using.
What did you do to connect it? Did you follow the steps on GitHub, or something else? As I said before, in my case, even following those, I couldn’t get it working after updating my CUDA to the current version: 9.0.252.
About the cameras, thanks for your advice. Now I’ll try with a ZED, because I have one here.
Thanks!

Hi

If I remember rightly it did take a fair bit of experimentation to get it working correctly.

Watch carefully when building libfreenect2, as you will need to invoke the correct options for building with CUDA. The instructions, which you have probably seen already, are at https://github.com/OpenKinect/libfreenect2/blob/master/README.md#linux but it's easy to get lost!
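
Condensed, the whole build went roughly like this for me (paths are from my own setup, and this is just a sketch; the individual options are explained below):

git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2
mkdir build && cd build
cmake .. -DENABLE_CXX11=ON -DENABLE_CUDA=ON -DENABLE_OPENCL=OFF \
         -DCMAKE_INSTALL_PREFIX=$HOME/freenect2
make
make install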

You also need to follow the instructions for CMake to allow third-party applications to make use of libfreenect2, as well as build with C++11 options enabled. If I remember rightly, you can set the options in CMakeLists.txt (found in the top-level directory of the libfreenect2 folder) or add them as parameters when you run cmake.

Allowing third-party use:

cmake .. -Dfreenect2_DIR=$HOME/freenect2/lib/cmake/freenect2

Enabling C++11:

cmake .. -DENABLE_CXX11=ON

Also, from the kinect2_bridge instructions, there is an option for building with CUDA:

cmake .. -DENABLE_CXX11=ON -DCUDA_PROPAGATE_HOST_FLAGS=off

but I used CMakeLists.txt as mentioned above, as it lets you catch the CUDA ON and OpenCL OFF flags more easily. The section with the options, around line 34, should look like this:

SET(MY_DIR ${libfreenect2_SOURCE_DIR})
SET(DEPENDS_DIR "${MY_DIR}/depends" CACHE STRING "dependency directory must be set to 'false' if external deps are used")

OPTION(BUILD_SHARED_LIBS "Build shared (ON) or static (OFF) libraries" ON)
OPTION(BUILD_EXAMPLES "Build examples" ON)
OPTION(BUILD_OPENNI2_DRIVER "Build OpenNI2 driver" ON)
OPTION(ENABLE_CXX11 "Enable C++11 support" ON)
OPTION(ENABLE_OPENCL "Enable OpenCL support" OFF)
OPTION(ENABLE_CUDA "Enable CUDA support" ON)
OPTION(ENABLE_OPENGL "Enable OpenGL support" OFF)
OPTION(ENABLE_VAAPI "Enable VA-API support" OFF)
OPTION(ENABLE_TEGRAJPEG "Enable Tegra HW JPEG support" OFF)
OPTION(ENABLE_PROFILING "Collect profiling stats (memory consuming)" OFF)

If you set some of these differently it may not make a difference, as kinect2_bridge will try the available options when launching. So if you had OpenCL set to ON, kinect2_bridge might try to run with OpenCL, see that it cannot on the TX2, and default to the CPU. Just make sure that CUDA and C++11 are ON; I believe BUILD_EXAMPLES needs to be ON too. If things do not work, experimenting with these flags can help.
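
Equivalently, instead of editing CMakeLists.txt, you should be able to pass the same options straight to cmake (a sketch combining the flags already mentioned above):

cmake .. -DENABLE_CXX11=ON -DENABLE_CUDA=ON -DENABLE_OPENCL=OFF \
         -DBUILD_EXAMPLES=ON -DCUDA_PROPAGATE_HOST_FLAGS=off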

Unfortunately the support for kinect2_bridge is not stellar, and I have similarly not had much response on the libfreenect2 page either, so you may just have to persevere. It definitely does work on the TX2.

I am assuming that you have CUDA built and running on the TX2; you can check it is working by following the verification steps here: https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-installation
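
On the TX2 itself, a quick check is building and running the deviceQuery sample (this path assumes the CUDA samples were installed by JetPack):

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery    # should list the TX2's GPU and finish with Result = PASS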

Hope this helps!

Hey @RoboRoss

Thanks a lot for that!
Now I have compiled the library; however, the new issue I'm facing seems to be a USB problem. I posted all my tests following the instructions in this link:

Anyhow, I’m thinking about giving up on this camera. Yesterday I was using the ZED camera, and I didn’t have any problems.

I appreciate your help, and I wish you the best.

Levi

Hi Ross,

I have been trying to get rtabmap_ros working on a Jetson TX2 with a Kinect v2. I first tried using the Kinect v2 to provide the odometry, but the resulting 3D map was messy (though not completely awful), presumably because the precision of rtabmap's visual odometry isn't so good. To improve the situation I tried to use my own external odometry, but unfortunately I couldn't get it working at all, as the image frames don't seem to be syncing with the odometry messages. Before stumbling onto your post I tried tuning various rtabmap parameters and also reducing my odom publishing frequency, but without any success.
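
In case it helps to compare, this is roughly how I have been launching rtabmap with the external odometry (topic names are from my own setup, and I'm assuming visual_odometry and odom_topic are the right rtabmap.launch arguments to use):

roslaunch rtabmap_ros rtabmap.launch \
    visual_odometry:=false \
    odom_topic:=/odom \
    rgb_topic:=/kinect2/qhd/image_color_rect \
    depth_topic:=/kinect2/qhd/image_depth_rect \
    camera_info_topic:=/kinect2/qhd/camera_info \
    approx_sync:=true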

Currently the publishing rates are:

/odom 19.4 Hz
/kinect2/qhd/image_depth_rect 5.5 Hz
/kinect2/qhd/camera_info 14.9 Hz
/kinect2/qhd/image_color_rect 2.2 Hz

Is this similar to your own experience? Were you ever able to get this working with the Kinect v2 and the Jetson TX2? If not, were you able to get rtabmap_ros working with another sensor (for example a RealSense or ZED 2) on a Jetson TX2 with external odometry, and get decent results?