Run Multiple Computer Vision Cameras on one Jetson Nano board


So in my lab we’re developing a custom sensor based on a computer-vision camera with OpenCV processing, using CUDA acceleration. As such, we’re trying to capture images from the camera on a Jetson Nano at an ideal rate of 60 fps and a resolution of at least 1280x720.

Using our custom application, we’re able to get one camera working at a reasonable rate. My question is:

Has anyone been able to run 2-4 cameras off a Jetson Nano, and if so, what is the best architecture for doing so? Is there a C++ sample application that runs multiple cameras at once and performs OpenCV processing with CUDA GPU acceleration?

Please refer to these samples for mapping an NvBuffer to cv::cuda::GpuMat with GStreamer or jetson_multimedia_api:

Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL
LibArgus EGLStream to nvivafilter - #14 by DaneLLL
NVBuffer (FD) to opencv Mat - #6 by DaneLLL
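Before wiring OpenCV in, it can help to confirm each sensor can actually deliver 720p60 on its own. An illustrative pipeline for that check, assuming CSI cameras driven by `nvarguscamerasrc` (run one instance per camera, selected by `sensor-id`); this must run on the Jetson itself:

```shell
# Measure achievable fps from sensor 0 at 1280x720, 60 fps.
# fpsdisplaysink with a fakesink video-sink prints frame-rate stats
# without rendering anything.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1' ! \
  fpsdisplaysink text-overlay=0 video-sink=fakesink -v
```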