Inference using 4 cameras

I would like to run inference on more than one camera, using the following:

  • 4 MIPI cameras, as in sample 13_multi_camera - here the images are handled through an NvBuffer fd

  • inference as in jetson-inference, but on all 4 cameras - here a uchar3* image is passed to the inference code

I plan to use NvBufferMemMap() from nvbuf_utils.h to get the address of the memory-mapped plane as a void**.
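
Roughly what I have in mind per frame is the following sketch (dmabuf_fd would come from the 13_multi_camera capture loop; only plane 0 is shown, error checks omitted):

    #include "nvbuf_utils.h"

    // dmabuf_fd: the NvBuffer fd of the captured frame (one per camera)
    NvBufferParams par;
    NvBufferGetParams(dmabuf_fd, &par);

    void *plane_ptr = NULL;
    // Map plane 0 of the buffer and get its virtual address
    if (NvBufferMemMap(dmabuf_fd, 0, NvBufferMem_Read_Write, &plane_ptr) == 0)
    {
        // ... here I want to hand the pixels to the inference code ...
        NvBufferMemUnMap(dmabuf_fd, 0, &plane_ptr);
    }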

But I don’t understand what the format of the mapped memory would be, or whether this address points to GPU memory or not.

In case my approach is not correct, I would be glad to hear any advice.

Hi,
It is a CPU pointer. Please refer to the code under DO_CPU_PROCESS in the 10_camera_recording sample.
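
For reference, the CPU path in that sample is roughly the following (simplified sketch, error checks omitted; please check the sample for the full flow):

    #include "nvbuf_utils.h"

    NvBufferParams par;
    NvBufferGetParams(dmabuf_fd, &par);

    void *ptr = NULL;
    // Map plane 0 and sync it so the CPU sees the latest frame data
    NvBufferMemMap(dmabuf_fd, 0, NvBufferMem_Read_Write, &ptr);
    NvBufferMemSyncForCpu(dmabuf_fd, 0, &ptr);

    // ptr is a CPU (virtual) address. For plane 0 the layout is
    // par.height[0] rows of par.width[0] pixels, with consecutive rows
    // par.pitch[0] bytes apart (the pixel format is whatever the buffer
    // was created with, e.g. NV12 or RGBA).
    // ... CPU processing here ...

    // Sync back for the hardware and unmap
    NvBufferMemSyncForDevice(dmabuf_fd, 0, &ptr);
    NvBufferMemUnMap(dmabuf_fd, 0, &ptr);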

For getting a GPU pointer, you can check cuda_postprocess() in the 12_camera_v4l2_cuda sample.
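
The GPU path in that sample essentially wraps the buffer fd in an EGLImage and maps it into CUDA, roughly like this (simplified sketch; dmabuf_fd and egl_display come from your capture/EGL setup, error checks omitted):

    #include "nvbuf_utils.h"
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <cuda.h>
    #include <cudaEGL.h>

    // Wrap the dmabuf fd in an EGLImage and register it with CUDA
    EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd);

    CUgraphicsResource resource;
    cuGraphicsEGLRegisterImage(&resource, egl_image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    CUeglFrame egl_frame;
    cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);
    cuCtxSynchronize();

    // egl_frame.frame.pPitch[0] is a GPU (device) pointer to plane 0;
    // a CUDA kernel (or your inference pre-processing) can read it directly
    // ... launch CUDA kernel / inference pre-processing here ...

    cuCtxSynchronize();
    cuGraphicsUnregisterResource(resource);
    NvDestroyEGLImage(egl_display, egl_image);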

Hi DaneLLL,
Thank you for helping!
What would you advise using for inference with multiple cameras?
I have experience with C++, but I’m not sure which approach to take.
Any advice would be appreciated.

Hi,
An existing solution is to use the DeepStream SDK. You can check the sample config files in

/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app

source1_csi_dec_infer_resnet_int8.txt is for a single camera source. You may try it first to make sure it runs successfully, and then customize it to 4 sources. You can refer to the other config files for multiple sources.
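
As a rough idea of the customization (key names follow the stock deepstream-app configs; sensor IDs, resolutions and batch sizes depend on your setup), you would repeat the source group four times, set the batch size to 4 and tile the output:

    [source0]
    enable=1
    # type 5 = CSI camera on Jetson
    type=5
    camera-width=1280
    camera-height=720
    camera-fps-n=30
    camera-fps-d=1
    camera-csi-sensor-id=0

    # ... [source1] to [source3] are the same, with camera-csi-sensor-id=1/2/3 ...

    [streammux]
    batch-size=4

    [tiled-display]
    enable=1
    rows=2
    columns=2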
