Kinect V2 support

Hello,
has anyone tried to connect the Kinect V2 to the Jetson Nano? I read in a previous topic that, early on, there were some firmware-related problems with the TX1 and TX2.

I’m also wondering whether the Nano can handle the camera’s stream rate while also doing some computation or image analysis, such as recognizing an object or tracking its position in space at a sufficiently high frame rate.

thanks!

I also want to connect a Kinect v2 to the Nano. Is that possible?

Hi,
We support the USB UVC driver by default in our BSP releases. It looks like the Kinect requires a specific driver, and we don’t have experience running it. Other users may share their experiences.

Getting Started with ROS on Jetson Nano – Stereolabs. Here is an example of connecting a depth camera to the Nano for vSLAM.

Hi,
I did connect my Kinect v2 successfully to the Jetson Nano by:
a) following the instructions here: libfreenect2/README.md at master · OpenKinect/libfreenect2 · GitHub
b) using sudo ./bin/Protonect to start the test program (full steps sketched below).
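
For anyone else trying this, the build was roughly the following (a sketch based on the libfreenect2 README; the dependency package names are for Ubuntu 18.04 and may differ on other releases):

# install build tools and the libraries libfreenect2 needs
sudo apt-get install build-essential cmake pkg-config libusb-1.0-0-dev libturbojpeg0-dev libglfw3-dev

# fetch and build libfreenect2
git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2
mkdir build && cd build
cmake ..
make

# install the udev rules so Protonect can run without sudo, then replug the Kinect
sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/

# run the test program
./bin/Protonect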

The Nano was powered by a no-name 5V 2.1A DC adapter; everything was fine.

Good luck.

Hey guys!
I’m using the Jetson TX2 with the following configuration:
L4T 28.2.1 [ JetPack 3.3 or 3.2.1 ]
Board: t186ref
Ubuntu 16.04.6 LTS
Kernel Version: 4.4.38-tegra
CUDA 9.0.252

I’d like to connect it to a Kinect v2, but I haven’t had any success. I’ve tried everything I read on the forums here, on GitHub, and on JetsonHacks, and even tried installing it as a Kinect v1, but nothing worked. In the end, I read that it’s not possible to connect a stock board (TX2) to this device. Is that true?

By the way, which camera is better for developing SLAM applications? The ZED?

Thanks in advance.

Hey, I got the Kinect V1 working with the ROS OpenNI driver on the Jetson Nano, and I have the navigation stack and gmapping working too. The problem is that when I start ros_control together with gmapping, the laser scan data starts to jump. After some Googling I read somewhere that this can happen due to a lack of CPU power. Is there anything I can do about that?
Thanks…

That’s great news that you got the Kinect v1 working on a Nano! I’m hoping to get it going myself. Can you point me to the tutorial you followed? I’ve got a three-foot-tall robot with a shiny new Kinect on the front, but I just can’t get it working on my rock64pro. I just got a Nano to try, but I’d read they were only compatible with the Kinect v2.

Regarding your question, powering the Nano via the DC barrel connector will give better performance (note that you need to fit the jumper on J48 to switch the Nano from micro-USB to barrel-jack power).

If the problem turns out to be that the Nano just can’t handle the point cloud (as is the case with the Pi 3, I think), there is a ROS package called pointcloud_to_laserscan (or something very similar) that helps.
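
If it comes to that, the usage is something like this (untested on my side; the cloud topic assumes the openni_launch defaults):

# install the converter and run it against the Kinect's point cloud;
# it publishes a sensor_msgs/LaserScan on the scan topic
sudo apt-get install ros-melodic-pointcloud-to-laserscan
rosrun pointcloud_to_laserscan pointcloud_to_laserscan_node cloud_in:=/camera/depth/points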

I just installed the ROS package from the following link and it was working fine:

http://wiki.ros.org/openni_launch

To install it, please try the following:

sudo apt-get install ros-melodic-openni-launch

If you are working with ROS, you won’t have any issues with that. After I installed it, the Kinect was listed in lsusb.

Even if you are not working with ROS, the OpenNI driver may be able to help you.
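
For reference, after installing, the driver is started with the standard launch file (assuming ROS Melodic):

# start the Kinect v1 driver; it publishes RGB and depth topics under /camera
roslaunch openni_launch openni.launch

# in another terminal, check that the topics are coming up
rostopic list | grep /camera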

As for my other problem, it was caused by two nodes publishing a transform for the same tf link. After I stopped one of the nodes, it worked fine.

The Jetson Nano handles point clouds from the Kinect v1 well, and I was able to visualize them in RViz. I was also able to run rtabmap_ros to create a 3D map of my room with it. (It was my first time with RTAB-Map, and it is really COOL!) ROS navigation also worked fine, and I got some nice results with AMCL and ros_control for a differential-drive robot.

I am running in 10W mode, and even with my 5V 2.1A power bank everything works fine. Just make sure you add a cooling fan (I added a cheap 12V one). One time the Nano shut off, I think because I forgot to turn on the cooling fan and ran it for a while; it got very hot. Other than that, everything works fine and I’m really happy with the Jetson Nano. I always wanted to get my hands on a Jetson, but they were out of my budget until now. Nvidia did a great job with this one. Thank you, Nvidia, for making it available at this price. :)

I had one issue: sometimes when I visualized data such as a point cloud in RViz, the Nano stopped responding. If that becomes a problem, just run RViz on another machine connected to the same ROS master over Wi-Fi, with all the other processes still running on the Jetson Nano. That said, the RTAB-Map 3D map generation visualized fine in RViz, with only a bit of lag.
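
If you want to try the remote RViz setup, it is roughly this (a sketch; the IP addresses are placeholders for your own network):

# on the Jetson Nano, which runs roscore and all the heavy nodes
export ROS_IP=192.168.1.10                        # the Nano's own address

# on the desktop machine that runs RViz
export ROS_MASTER_URI=http://192.168.1.10:11311   # point at the roscore on the Nano
export ROS_IP=192.168.1.20                        # this machine's own address
rviz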

If you need any help with any of these, please ask me.

Sorry for the delayed reply.

Here is the thing I built. I’m on a budget. :)

ROS robot build.jpg - Google Drive

Hey guys!
I got it! I was struggling using libfreenect2 alone, but when I tried the full installation (libfreenect2 + kinect2_bridge + rtabmap) it worked.
By the way, it’s good to know that it’s possible to connect other devices too!
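
In case it helps anyone, once everything is built, the pipeline I start looks roughly like this (assuming the iai_kinect2 package; the topic names are its defaults):

# start the Kinect v2 driver
roslaunch kinect2_bridge kinect2_bridge.launch publish_tf:=true

# in another terminal, feed its rectified streams to rtabmap
roslaunch rtabmap_ros rtabmap.launch rgb_topic:=/kinect2/qhd/image_color_rect depth_topic:=/kinect2/qhd/image_depth_rect camera_info_topic:=/kinect2/qhd/camera_info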

Thanks a lot!

Can you please help check my issue at

https://devtalk.nvidia.com/default/topic/1057667/jetson-nano/-error-vaapirgbpacketprocessorimpl-vadisplayisvalid-display-failed/post/5363134/#5363134

Did you get your Kinect V2 working on the Jetson Nano?
Can you share more details about how? And what packages did you install on the Nano to make it work?

Cheers
Pei

Are you sure you ONLY followed the instructions at https://github.com/OpenKinect/libfreenect2/blob/master/README.md#linux ?

I still failed to start the test program. Could you please take a look at my error messages?
https://devtalk.nvidia.com/default/topic/1057667/jetson-nano/-error-vaapirgbpacketprocessorimpl-vadisplayisvalid-display-failed/post/5363134/#5363134

Cheers
Pei

Here is a link to a video of the Jetson Nano using the Kinect v2 to run YOLO (darkflow) with depth data extracted and added to the YOLO model results:

Here is a link to the GitHub repo on how to do it:

I already followed the documentation strictly. However, I still got the following error messages:

lvision@lvision-desktop:~/Downloads/Kinect/libfreenect2/build$ sudo LIBUSB_DEBUG=3 ./bin/Protonect
Version: 0.2.0
Environment variables: LOGFILE=<protonect.log>
Usage: ./bin/Protonect [-gpu=<id>] [gl | cl | clkde | cuda | cudakde | cpu] [<device serial>]
        [-noviewer] [-norgb | -nodepth] [-help] [-version]
        [-frames <number of frames to process>]
To pause and unpause: pkill -USR1 Protonect
[Info] [Freenect2Impl] enumerating devices...
[Info] [Freenect2Impl] 9 usb devices connected
[Info] [Freenect2Impl] found valid Kinect v2 @1:7 with serial 178602434347
[Info] [Freenect2Impl] found 1 devices
[Error] [VaapiRgbPacketProcessorImpl] vaDisplayIsValid(display) failed
[Info] [Freenect2DeviceImpl] opening...
[Error] [protocol::UsbControl] failed to claim interface with IrInterfaceId(=1)! LIBUSB_ERROR_BUSY Resource busy. Try debugging with environment variable: export LIBUSB_DEBUG=3 .
[Info] [Freenect2DeviceImpl] closing...
[Info] [Freenect2DeviceImpl] deallocating usb transfer pools...
[Info] [Freenect2DeviceImpl] closing usb device...
[Info] [Freenect2DeviceImpl] closed
[Error] [Freenect2Impl] failed to open Kinect v2: @1:7
failure opening device!

“To test that everything works, run: sudo ./bin/Protonect. You should get video and depth streams. This install was pretty straightforward.”

I failed to run the libfreenect2 Protonect on the Jetson Nano…

Alright… problem found and solved!
It’s ALL about the USB cable… you really need a VERY sturdy, thick USB cable.
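
If anyone hits the same thing, a quick way to check whether the Kinect actually came up on a good link (just a diagnostic sketch):

# the Kinect v2 has to enumerate at 5000M (USB 3.0) to stream;
# a weak cable often makes it fall back to a slower speed or drop off the bus
lsusb -t

# Microsoft's USB vendor ID is 045e; make sure the device is present at all
lsusb | grep -i 045e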

Cheers

I can also confirm that everything works with libfreenect2 on the Jetson Nano with a Kinect V2, per the instructions.

I had an issue with the make step (libGL.so not found), but I updated the broken symlink and all was good.

Original (the symlink pointed at a Tegra path that didn’t exist):
cd /usr/lib/aarch64-linux-gnu
ls -lrt libGL.so
libGL.so → /usr/lib/aarch64-linux-gnu/tegra/libGL.so

Changed to:
# remove the broken symlink and re-point it at the versioned library in the same directory
sudo rm libGL.so
sudo ln -s libGL.so.1.0.0 libGL.so

Using a Raspberry Pi 2A/5V micro-USB adapter; it works fine.