Run Multiple Computer Vision Cameras on one Jetson Nano board

Hello!

So in my lab we’re developing a custom sensor based on a computer-vision camera with OpenCV processing, accelerated with CUDA. We’re trying to capture images from the camera on a Jetson Nano at an ideal rate of 60 fps and a resolution of at least 1280x720.

Using our custom application, we’re able to get one camera working at a reasonable rate. My question is:

Has anyone been able to run 2-4 cameras off a Jetson Nano, and if so, what is the best architecture for doing so? Is there a C++ sample application that runs multiple cameras at once and performs OpenCV operations with CUDA GPU acceleration?

Hi,
Please refer to the following samples, which map NvBuffer to cv::gpu::gpuMat with gstreamer or jetson_multimedia_api:

[gstreamer]
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL

[jetson_multimedia_api]
LibArgus EGLStream to nvivafilter - #14 by DaneLLL
NVBuffer (FD) to opencv Mat - #6 by DaneLLL
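
Below is a minimal sketch of one way to drive several CSI cameras from a single C++ process: each camera is opened through a GStreamer nvarguscamerasrc pipeline with cv::VideoCapture in its own thread, and each frame is uploaded to a cv::cuda::GpuMat for CUDA processing. Note this copies frames through CPU memory rather than mapping NvBuffer directly as in the linked posts; the sensor IDs, resolution, frame rate, and the cvtColor step are assumptions for illustration.

```cpp
// Sketch: one capture thread per CSI camera, frames uploaded to the GPU for
// CUDA processing. Assumes OpenCV is built with GStreamer and CUDA support.
#include <opencv2/opencv.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <atomic>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Build an nvarguscamerasrc pipeline for one CSI camera (values are examples).
static std::string makePipeline(int sensorId, int width, int height, int fps)
{
    return "nvarguscamerasrc sensor-id=" + std::to_string(sensorId) +
           " ! video/x-raw(memory:NVMM), width=" + std::to_string(width) +
           ", height=" + std::to_string(height) +
           ", framerate=" + std::to_string(fps) + "/1"
           " ! nvvidconv ! video/x-raw, format=BGRx"
           " ! videoconvert ! video/x-raw, format=BGR"
           " ! appsink drop=true max-buffers=2";
}

static std::atomic<bool> running{true};

// Capture loop for one camera: read a frame, upload it, run an example CUDA op.
static void cameraLoop(int sensorId)
{
    cv::VideoCapture cap(makePipeline(sensorId, 1280, 720, 60), cv::CAP_GSTREAMER);
    if (!cap.isOpened())
    {
        std::fprintf(stderr, "camera %d failed to open\n", sensorId);
        return;
    }

    cv::Mat frame;
    cv::cuda::GpuMat dFrame, dGray;
    while (running && cap.read(frame))
    {
        dFrame.upload(frame);                                    // copy to GPU memory
        cv::cuda::cvtColor(dFrame, dGray, cv::COLOR_BGR2GRAY);   // example CUDA operation
        // ... per-camera CUDA processing goes here ...
    }
}

int main()
{
    const int numCameras = 2;                                    // assumption: 2 CSI cameras
    std::vector<std::thread> workers;
    for (int i = 0; i < numCameras; ++i)
        workers.emplace_back(cameraLoop, i);
    for (auto &t : workers)
        t.join();
    return 0;
}
```

If the extra videoconvert/upload copies keep you from hitting 60 fps per camera, the zero-copy NvBuffer-to-gpuMat path described in the posts above is the way to avoid them.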
