What code example can be taken as a basis on the Jetson platform for stitching real-time video from three cameras?
If your use case is based on GStreamer, you can use the nvcompositor plugin. Please refer to
Jetson Nano encoding frame rate issues - #3 by DaneLLL
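As a starting point, a side-by-side composition of three cameras with nvcompositor can be launched from the command line. This is a sketch only: the sensor IDs, resolutions, and the display sink (nv3dsink here; nvoverlaysink on older JetPack releases) depend on your board, sensors, and JetPack version, so adjust the caps accordingly.

```shell
# Sketch: composite three Argus (Bayer/ISP) cameras into one 1920x480 view.
# sensor-id, resolutions, and the sink element are assumptions to adapt.
gst-launch-1.0 nvcompositor name=comp \
  sink_0::xpos=0    sink_0::ypos=0 sink_0::width=640 sink_0::height=480 \
  sink_1::xpos=640  sink_1::ypos=0 sink_1::width=640 sink_1::height=480 \
  sink_2::xpos=1280 sink_2::ypos=0 sink_2::width=640 sink_2::height=480 \
  ! nvvidconv ! nv3dsink \
  nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=640,height=480' ! comp.sink_0 \
  nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM),width=640,height=480' ! comp.sink_1 \
  nvarguscamerasrc sensor-id=2 ! 'video/x-raw(memory:NVMM),width=640,height=480' ! comp.sink_2
```

Note that nvcompositor only places and scales the inputs; it does not do geometric alignment or seam blending.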
What tools on NVIDIA Jetson can work with RAW video formats to perform video pre-processing, multi-view video alignment, color correction, panoramic video stitching, and display-area extraction?
If your source is a Bayer sensor, we would suggest using the ISP engine to get YUV420 frames and utilizing the hardware blocks for the processing. If you need to capture RAW frames, you would need to use the GPU and implement the functions in CUDA. For capturing RAW frames into CUDA buffers, please take a look at the sample:
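To illustrate the kind of per-pixel work that would be implemented in CUDA, here is a minimal CPU sketch of one of the listed steps, color correction via a 3x3 matrix applied to interleaved RGB pixels. The function name and the matrix values are placeholders (not tuned for any real sensor); on Jetson the same arithmetic would run inside a CUDA kernel over the captured buffer.

```cpp
// Hypothetical sketch: apply a 3x3 color-correction matrix (CCM) to
// interleaved RGB8 pixels in-place, clamping each channel to 0..255.
// On the Jetson this loop body would be one CUDA thread per pixel.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

void applyColorCorrection(std::vector<uint8_t>& rgb, const float ccm[3][3]) {
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        const float in[3] = { float(rgb[i]), float(rgb[i + 1]), float(rgb[i + 2]) };
        for (int r = 0; r < 3; ++r) {
            // Each output channel is a weighted sum of the input channels.
            float v = ccm[r][0] * in[0] + ccm[r][1] * in[1] + ccm[r][2] * in[2];
            rgb[i + r] = uint8_t(std::clamp(v, 0.0f, 255.0f));
        }
    }
}
```

With the identity matrix the pixels pass through unchanged; a real CCM would come from sensor calibration.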
I tried the example in /usr/src/jetson_multimedia_api/samples/v4l2cuda/, but I don’t see functions for “video pre-processing, multi-view video alignment, color correction, panoramic video stitching”. Maybe you know of code examples for panoramic video stitching from 3 cameras? Thanks.
The sample demonstrates capturing frames into CUDA buffers. The further functions you would need to implement yourself.
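To show where such an implementation would slot in after capture, here is a deliberately naive sketch: three equally sized RGB frames composed side by side into one panorama buffer. The function name is hypothetical, and real stitching would additionally need lens undistortion, homography-based alignment, and seam blending (e.g. with OpenCV or a custom CUDA pipeline), none of which is shown here.

```cpp
// Hypothetical sketch: the simplest possible "stitch" of N equally sized
// RGB8 frames -- plain horizontal concatenation, no alignment or blending.
#include <cstddef>
#include <cstdint>
#include <vector>

// Each frame is width*height*3 bytes of interleaved RGB.
std::vector<uint8_t> stitchSideBySide(const std::vector<std::vector<uint8_t>>& frames,
                                      std::size_t width, std::size_t height) {
    const std::size_t n = frames.size();
    const std::size_t rowBytes = width * 3;          // bytes per input row
    std::vector<uint8_t> pano(n * rowBytes * height);
    for (std::size_t f = 0; f < n; ++f)
        for (std::size_t y = 0; y < height; ++y)
            for (std::size_t x = 0; x < rowBytes; ++x)
                // Copy frame f's row into its horizontal slot in the panorama.
                pano[y * n * rowBytes + f * rowBytes + x] =
                    frames[f][y * rowBytes + x];
    return pano;
}
```

In a real pipeline each frame would first be warped into a common coordinate system before composition, with overlapping regions blended rather than butted together.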