I’m using an AGX Xavier with Jetpack 4.6 (MMAPI 32.6.1). I have GMSL cameras which use the V4L2 drivers. I have studied and used the MMAPI sample 12_camera_v4l2_cuda to capture frames from the cameras and process them in CUDA.
I need to do this for multiple cameras simultaneously. Using pthreads, I have been able to capture from each camera and process it in a separate thread, roughly as sketched below. However, I cannot figure out how to access a frame from each capture thread in the main thread, where I want to perform the CUDA operations.
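For context, this is a minimal sketch of my threading layout. The context struct and thread function are placeholders of my own, not code from the sample:

```cpp
#include <pthread.h>
#include <cstdio>

// Hypothetical per-camera context; the real one holds the V4L2 fd,
// the allocated dmabuf fds, CUDA resources, etc. as in 12_camera_v4l2_cuda.
struct camera_ctx {
    int cam_index;
    const char *devname;   // e.g. "/dev/video0"
};

// Placeholder capture loop: in the real code this opens the device, dequeues
// V4L2 buffers, runs NvBufferTransform, and does the per-camera processing.
static void *capture_thread(void *arg) {
    camera_ctx *ctx = static_cast<camera_ctx *>(arg);
    printf("capture thread started for %s (camera %d)\n",
           ctx->devname, ctx->cam_index);
    // ... per-frame capture/processing loop goes here ...
    return nullptr;
}

int main() {
    camera_ctx ctx[2] = {{0, "/dev/video0"}, {1, "/dev/video1"}};
    pthread_t tid[2];

    for (int i = 0; i < 2; ++i)
        pthread_create(&tid[i], nullptr, capture_thread, &ctx[i]);

    for (int i = 0; i < 2; ++i)
        pthread_join(tid[i], nullptr);

    return 0;
}
```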
I also studied the 13_multi_camera sample, but since it is written for libargus cameras, I am not able to use it for my V4L2 cameras.
I would appreciate any pointers in figuring this out.
Hi,
For this use case, you can create a queue: the capture threads push captured frames into it, and the main thread pops the frames for processing. Add a mutex to protect the enqueue/dequeue operations. There is similar code in the backend sample. Please take a look.
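Roughly like this. This is only a sketch of the queue/mutex pattern (the backend sample's own queue differs in details), and the Frame struct here is just a placeholder for whatever handle your capture threads produce:

```cpp
#include <pthread.h>
#include <queue>

// Placeholder frame handle produced by a capture thread.
struct Frame {
    int cam_index;
    int dmabuf_fd;   // stand-in for the captured/transformed buffer
};

// Shared queue protected by a mutex, with a condition variable so the
// main thread can block until a capture thread delivers a frame.
struct FrameQueue {
    std::queue<Frame> q;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

    // Called by each capture thread after it finishes a frame.
    void push(const Frame &f) {
        pthread_mutex_lock(&lock);
        q.push(f);
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    // Called by the main thread; blocks until a frame is available,
    // then hands it over for CUDA processing.
    Frame pop() {
        pthread_mutex_lock(&lock);
        while (q.empty())
            pthread_cond_wait(&cond, &lock);
        Frame f = q.front();
        q.pop();
        pthread_mutex_unlock(&lock);
        return f;
    }
};
```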
Thanks for the suggestion. I had the same idea. In the backend sample, we get an NvBuffer object directly from the decoder callback, which can be pushed into a queue.
However, when capturing from the camera (as in 12_camera_v4l2_cuda), after NvBufferTransform we only have the dmabuf_fd. How can I get an NvBuffer object from just the existing dmabuf_fd so that I can push it into a queue?
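For reference, the relevant part of each capture thread looks roughly like this (simplified, with my own names, not copied verbatim from the sample):

```cpp
#include <cstring>
#include "nvbuf_utils.h"   // NvBufferTransform, NvBufferTransformParams

// Roughly what each capture thread does per frame after VIDIOC_DQBUF.
// capture_dmabuf_fd is the dequeued V4L2 capture buffer; render_dmabuf_fd
// is a destination buffer previously allocated with NvBufferCreate().
static int convert_frame(int capture_dmabuf_fd, int render_dmabuf_fd)
{
    NvBufferTransformParams transParams;
    memset(&transParams, 0, sizeof(transParams));
    transParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transParams.transform_filter = NvBufferTransform_Filter_Smart;

    // Blit/convert the captured frame into the destination dmabuf.
    int ret = NvBufferTransform(capture_dmabuf_fd, render_dmabuf_fd, &transParams);

    // At this point I only hold render_dmabuf_fd (a plain int), not an
    // NvBuffer object, so it is unclear what to push into the shared queue.
    return ret;
}
```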