I have a Jetson Xavier NX and two CSI IMX219 cameras.
I have finished a JetBot implementation using monocular vision, and now I want to achieve the same thing with stereo vision. Please tell me the code and steps to perform stereo vision on a JetBot with a Jetson Xavier NX and two CSI IMX219 cameras.
Please let me know as soon as possible.
I would suggest using a ZED USB camera for a 3D stereo vision use case.
A long time ago, I implemented stereo vision manually on a moving robot with a small mini-ITX PC, and later ported it to a Jetson TX2.
The basic idea is to:
- Make sure the cameras are in sync
- Build a disparity map between scanlines
- Use the camera parameters to turn the disparity map into measured parallax
- Turn the parallax into known depth values
OpenCV has some pretty reasonable support for this – it was the bee’s knees in computer vision in the '90s!
However, the camera setup part is hard, for a few reasons:
- You need very well surveyed mounting hardware for the cameras, so they point in the same direction with a known center distance
- You need the cameras to be well calibrated. You can print a checkerboard pattern (ideally with two different odd square counts – I think I used 7x9), photograph it in a bunch of different orientations, and run checkerboard detection in OpenCV to work this up. You'll probably need 30-40 different pictures, from different areas of the viewport and at different distances, to get enough data for this to work.
- You need the cameras to see the same thing. This means you probably want polarizing filters, because glare on ground and flat surfaces is highly specular and thus direction dependent. You also want good hoods around the lenses, so that side illumination doesn’t cause flares in one of the cameras.
- You need the cameras to take the picture at the same time. This was basically impossible with the USB cameras I was using. It worked fine when things were standing still, but when the robot moved, the world turned to jello.
The software part of it isn't hard. There are several OpenCV stereo imaging tutorials on the internet (or at least there were back when I did this), and as long as you understand image processing and Python/C++ coding, they are easy to follow.
If you can make all this work for your Jetson / IMX219 setup, then OpenCV can do the rest. Beware that it's a lot of work, and even then, unless you can somehow genlock the two cameras (the default Raspberry Pi camera modules are not genlockable), you will still have the "moving means the world turns to jello" problem.
That experience was what made me buy a ZED camera from Stereolabs; they have solved all of these problems. It works pretty well! I highly recommend it.