I'm looking for help with low resolution and slow frame rates when stitching two camera feeds into one in real time, and with how to improve them. I'm using a Jetson Nano B01 and two IMX219 8MP camera modules. At 640x360 resolution the frame rate is really low at about 6 FPS, and only at a much lower 160x100 resolution does it come up to roughly 30 FPS. Any help in solving this problem and getting much better frame rates at higher resolutions would be greatly appreciated. Thanks in advance.
there’s one thing you should note about: pixel format types.
assuming you’re working with the Raspberry Pi camera v2 (IMX219), its output is in RGGB Bayer formats.
a format conversion must be included, since Bayer is not a format type OpenCV supports:
$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'RG10'
    Name        : 10-bit Bayer RGRG/GBGB
        Size: Discrete 3264x2464
            Interval: Discrete 0.048s (21.000 fps)
    ...
you may include a video converter, i.e. nvvidconv, to convert the sensor format to BGRx.
please also access the L4T Sources package; there are samples of OpenCV + gstreamer there.
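For reference, here's a minimal sketch of the kind of pipeline the L4T samples use: nvarguscamerasrc captures and debayers on the ISP, nvvidconv converts to BGRx, and videoconvert hands BGR frames to OpenCV's appsink. The helper name `gst_pipeline` and the default caps are my own choices, not taken from the samples; adjust width/height/framerate to a mode your sensor supports.

```python
# Sketch, assuming the stock JetPack GStreamer plugins (nvarguscamerasrc,
# nvvidconv) are installed. The helper name gst_pipeline is hypothetical,
# not from the L4T samples.

def gst_pipeline(sensor_id=0, width=1920, height=1080, fps=60):
    """Build a GStreamer pipeline string for cv2.VideoCapture:
    nvarguscamerasrc outputs NV12 in NVMM memory (debayered by the ISP),
    nvvidconv converts it to BGRx, videoconvert drops the x channel to
    the BGR layout OpenCV expects."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1, format=NV12 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! "
        "appsink drop=1"
    )

# Usage (requires an OpenCV build with GStreamer support):
#   import cv2
#   cap0 = cv2.VideoCapture(gst_pipeline(0), cv2.CAP_GSTREAMER)
#   cap1 = cv2.VideoCapture(gst_pipeline(1), cv2.CAP_GSTREAMER)
#   ok, frame = cap0.read()
```

Keeping the capture and conversion in hardware this way avoids doing the Bayer-to-BGR conversion on the CPU, which is usually where the frame rate is lost.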
Thanks a lot @JerryChang! I will give this a go and let you know how it goes. As a follow-up, and for clarification: is the Jetson Nano powerful enough to produce real-time stitching at higher frame rates and much better resolution? The IMX219 module we’re using is capable of 21fps@8MP, 60fps@1080P, and 180fps@720P. Our target is at least 60fps@1080P.