Stereo vision with IMX219 ISP cameras on Jetson Nano

Hey guys, a quick query. I have two IMX219 ISP cameras with 200-degree FOV. I tried stereo vision with Logitech USB cameras on a Jetson Nano and it works completely fine. I tried the same code with the IMX219 cameras, but I am running into problems: the cv2.imshow() window never shows the image and the process freezes.

For the stereo vision code I followed GitHub - LearnTechWithUs/Stereo-Vision: This program has been developed as part of a project at the University of Karlsruhe in Germany. The final purpose of the algorithm is to measure the distance to an object by combining two webcams and use them as a Stereo Camera.

Also, please point me to articles and posts if this has already been solved; I couldn't find one. I did find some examples with RasPi v2 cameras.

This code uses the V4L API for capture. Since the IMX219 is a Bayer sensor, this means the CPU driver will debayer and provide BGR format, which is a high load for the Nano. Then it converts to gray with the CPU again, a moderate load compared to the debayering.
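For illustration, this is roughly the per-frame CPU path in that code (device index 0 is just an example):

import cv2

# V4L2 capture: the driver debayers to BGR on the CPU for each frame
cam = cv2.VideoCapture(0)                            # /dev/video0 via V4L
ret, frame_bgr = cam.read()                          # debayered BGR frame
gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # second CPU pass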

You can avoid loading your CPU and leverage the HW resources in Jetson by using a GStreamer pipeline for the OpenCV capture.
So you would use nvarguscamerasrc for controlling your camera and debayering, then nvvidconv for converting into GRAY8, which allows you to read monochrome frames directly into OpenCV.
The GitHub code uses video0 and video2; if you're running 2 CSI cams, I suppose your case is video0 and video1.

CamR = cv2.VideoCapture('nvarguscamerasrc sensor-id=0 ! nvvidconv ! video/x-raw, format=GRAY8 ! queue ! appsink', cv2.CAP_GSTREAMER)   # sensor-id=0 -> Right camera
CamL = cv2.VideoCapture('nvarguscamerasrc sensor-id=1 ! nvvidconv ! video/x-raw, format=GRAY8 ! queue ! appsink', cv2.CAP_GSTREAMER)   # sensor-id=1 -> Left camera

while True:
    retR, grayR = CamR.read()
    retL, grayL = CamL.read()
    # Frames are already single-channel GRAY8, no cvtColor needed
    ...

@Honey_Patouceul, I followed your suggestion and replaced the lines for taking the calibration pictures. But when it comes to the stereo calculations (https://github.com/SaiHaneeshAllu/Stereo-Vision/blob/bb7d4cff679f2ebba8b91daca8aa90848284c8ac/Main_Stereo_Vision_Prog_jetson_withoutFilter.py#L187),

the cv2.imshow() in line 273 shows a blank window and the process freezes.

I also ran the stereo code on just a single image and the disparity images are not that great. Am I missing something, or should I capture more images for stronger calibration parameters?

My answer above probably lacked resolution and framerate in the pipeline. Note that you won't be able to push a very high pixel rate on the Nano with OpenCV videoio and imshow; however, 2x 640x480@30 fps would probably work. You may try that and adjust for your case.
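For example, a minimal sketch with explicit caps (the 640x480@30 values are placeholders; adjust them to a mode your sensor driver actually reports):

import cv2

def gst_pipeline(sensor_id, width=640, height=480, fps=30):
    # nvarguscamerasrc outputs NV12 in NVMM memory; nvvidconv converts to GRAY8
    return ('nvarguscamerasrc sensor-id=%d ! '
            'video/x-raw(memory:NVMM), width=%d, height=%d, framerate=%d/1 ! '
            'nvvidconv ! video/x-raw, format=GRAY8 ! queue ! appsink'
            % (sensor_id, width, height, fps))

CamR = cv2.VideoCapture(gst_pipeline(0), cv2.CAP_GSTREAMER)   # Right camera
CamL = cv2.VideoCapture(gst_pipeline(1), cv2.CAP_GSTREAMER)   # Left camera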

[EDIT: The link you've shared is different from the previous one. The new one uses remap (I guess for fisheye dewarping), and this would make a significant difference in processing… Be clear about what you want to do.]

Hey @Honey_Patouceul,
yes, we used remap to eliminate any distortion due to lens curvature.
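For reference, the x & y maps for remap come out of a calibration step roughly like this (a sketch assuming the fisheye model for the 200-degree lenses; K, D, R, P stand in for our actual calibration outputs):

import cv2
import numpy as np

# Placeholder calibration results; in practice K, D come from
# cv2.fisheye.calibrate() and R, P from cv2.fisheye.stereoRectify()
K = np.eye(3)           # camera intrinsics
D = np.zeros(4)         # fisheye distortion coefficients
R = np.eye(3)           # rectification rotation
P = K.copy()            # new projection matrix

# Build the x/y lookup maps once; cv2.remap() then uses them per frame
map_x, map_y = cv2.fisheye.initUndistortRectifyMap(
    K, D, R, P, (640, 480), cv2.CV_32FC1)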

Our ultimate goal: achieve stereo vision computation on Jetson using IMX219 cameras, to integrate obstacle avoidance. We have experience running the code on a robot with a normal laptop, but we want to shift to Jetson hardware for a modular approach.
A brief of our progress goes like this:

We want to achieve stereo vision using two cameras on a Jetson Nano.

Experiment 1: 2x Logitech cameras, i5 processor, Windows 10, OpenCV 4.3.
Progress: calibration works fine, processing goes well, and cv2.imshow() works fine.

Experiment 2: 2x Logitech cameras, Jetson Nano, JetPack 4.3, OpenCV 4.3.
Progress: calibration works fine, but processing hangs and cv2.imshow() hangs up.

Experiment 3: 2x IMX219 Waveshare 200-degree FOV cameras, Jetson Nano, JetPack 4.3, OpenCV 4.3.
Progress: calibration ran into issues but we resolved them; processing and computing still have issues.

Also, are there any resources available for stereo computing on the Jetson Nano, apart from ones using Intel RealSense or USB cameras?

Don't expect a Nano to challenge a multicore i5 CPU. It only has 4 Cortex-A57 arm64 cores even in its best NVP mode, which is not that much.
On the other hand, the Nano has a GPU and HW resources for scaling/converting/encoding/decoding that can make it efficient.

It also has an iGPU, meaning the same physical memory is shared between the CPU and GPU, although this has to be used in a specific way (e.g. unified memory) to get better performance.

I'm unable to try this now, but you may try the following (a rough sketch in Python follows after this list):

  • Capture as I proposed. You would get frames as monochrome 1-channel cv::Mats. Specify a resolution and framerate according to your cameras (I don't have a multiple-cam setup, so I can't tell if this really works).
  • Then upload to GPU (you would have uploaded the x & y maps beforehand as well).
  • Then use cuda::remap for unwarping into new cv::cuda::GpuMats.
  • Then try the disparity map (I can't say right now what is available from CUDA; I would have to try, but I have no time for this now… try it yourself and let us know).
    If only a CPU implementation is available, you would have to download into CPU cv::Mats, or use unified memory and may have to call cudaDeviceSynchronize().
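Here is a minimal sketch of those steps in Python, assuming an OpenCV build with CUDA support; cv2.cuda.createStereoBM is used here just as a stand-in disparity method, and the maps are placeholders for your actual calibration outputs:

import cv2
import numpy as np

# Captures from the GStreamer pipelines proposed earlier
CamR = cv2.VideoCapture('nvarguscamerasrc sensor-id=0 ! nvvidconv ! video/x-raw, format=GRAY8 ! queue ! appsink', cv2.CAP_GSTREAMER)
CamL = cv2.VideoCapture('nvarguscamerasrc sensor-id=1 ! nvvidconv ! video/x-raw, format=GRAY8 ! queue ! appsink', cv2.CAP_GSTREAMER)

# Placeholder x & y maps (in practice each camera gets its own pair
# of maps from the calibration step)
map_x = np.zeros((480, 640), dtype=np.float32)
map_y = np.zeros((480, 640), dtype=np.float32)

# Upload the maps to the GPU once, before the capture loop
gmap_x = cv2.cuda_GpuMat(); gmap_x.upload(map_x)
gmap_y = cv2.cuda_GpuMat(); gmap_y.upload(map_y)

stereo = cv2.cuda.createStereoBM(numDisparities=64, blockSize=19)

gpuL, gpuR = cv2.cuda_GpuMat(), cv2.cuda_GpuMat()
while True:
    retL, grayL = CamL.read()
    retR, grayR = CamR.read()
    if not (retL and retR):
        break
    gpuL.upload(grayL)                                # monochrome frame to GPU
    gpuR.upload(grayR)
    rectL = cv2.cuda.remap(gpuL, gmap_x, gmap_y, cv2.INTER_LINEAR)
    rectR = cv2.cuda.remap(gpuR, gmap_x, gmap_y, cv2.INTER_LINEAR)
    disp = stereo.compute(rectL, rectR).download()    # back to CPU for display
    cv2.imshow('disparity', disp)
    if cv2.waitKey(1) == 27:                          # Esc quits
        break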