Feeding RAW camera data directly to CUDA

Is there a way to get RAW data from a camera (bypassing the Orin ISP) directly into a CUDA pipeline, without context switches to wake up userspace code waiting on a V4L2 file descriptor?

It looks like EGL streams can do this, but I don’t see any way to get RAW camera data into an EGL stream on JetPack-based systems. DRIVE OS appears to have several ways to do this, but I don’t see how to do it with JetPack, where the only API for reading RAW camera data seems to be V4L2.

hello brian100,

please refer to the MMAPI samples, such as 12_camera_v4l2_cuda, for a demonstration.
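
At its core that sample is an ordinary V4L2 capture loop that hands each dequeued dmabuf to CUDA. A trimmed sketch of the pattern (device setup, format negotiation, and buffer allocation omitted; the real sample also keeps a table of dmabuf fds and restores buf.m.fd before re-queueing):

```cpp
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <poll.h>
#include <cstring>

// Simplified capture loop: streaming is already on and DMABUF
// buffers have already been queued.
void capture_loop(int fd)
{
    for (;;) {
        // Sleep until the capture hardware has filled a buffer --
        // this is the per-frame userspace wakeup.
        struct pollfd pfd = { fd, POLLIN, 0 };
        if (poll(&pfd, 1, -1) <= 0)
            break;

        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_DMABUF;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
            break;

        // Hand buf.m.fd (the dmabuf) to CUDA here; the sample maps
        // it through an EGLImage and launches its kernels on it.

        // Recycle the buffer for the next frame.
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
            break;
    }
}
```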

I see that example uses VIDIOC_DQBUF, which means userspace wakes up for every frame, and the GPU cannot start processing until userspace queues up more work. Is there any way to do it without those extra context switches?

hello brian100,

may I also know the actual use-case? For instance, please share your expectation or final goal for sending RAW camera data directly to CUDA.

Hi,

I have an RGBIR camera, which is not supported by the ISP, so I’m going to do ISP-style work in CUDA. I would like to do this with the best performance easily achievable. Keeping a single buffer which the camera hardware writes to and CUDA reads from is straightforward. From the TRM, it looks like the host controller should be able to use syncpoints to trigger the GPU to execute CUDA code without involving userspace or kernelspace code on the main CPU complex. DRIVE OS appears to have APIs that achieve this, so I’m asking if there is one for JetPack.
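
For concreteness, the zero-copy half I mean is roughly the sketch below: wrap the capture dmabuf in an EGLImage and register it with CUDA, so my kernels read the same memory the camera hardware writes. This assumes NvEGLImageFromFd from the older nvbuf_utils helpers (newer JetPack releases have an NvBufSurface equivalent), and error handling is omitted:

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>
#include "nvbuf_utils.h"

// Map a V4L2 dmabuf into the CUDA address space via an EGLImage.
// Returns a device pointer to plane 0 of a pitch-linear surface.
CUdeviceptr map_dmabuf_to_cuda(EGLDisplay display, int dmabuf_fd,
                               CUgraphicsResource *resource)
{
    EGLImageKHR image = NvEGLImageFromFd(display, dmabuf_fd);

    cuGraphicsEGLRegisterImage(resource, image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    CUeglFrame frame;
    cuGraphicsResourceGetMappedEglFrame(&frame, *resource, 0, 0);

    // For a pitch-linear RAW surface, plane 0 holds the pixel data.
    return (CUdeviceptr)frame.frame.pPitch[0];
}
```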

hello brian100,

please download the MMAPI: $ sudo apt install nvidia-l4t-jetson-multimedia-api
you may also see the Argus sample, /usr/src/jetson_multimedia_api/argus/samples/cudaBayerDemosaic/, for a demonstration.
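
The relevant pattern there: Argus produces RAW Bayer frames into an EGLStream, and CUDA connects as the stream consumer, so frames never pass through a V4L2 file descriptor. A trimmed sketch of the consumer side (Argus and EGLStream setup omitted, driver-API error checks dropped):

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>

// Consume RAW frames from an EGLStream fed by an Argus producer.
void consume(EGLStreamKHR stream)
{
    CUeglStreamConnection conn;
    cuEGLStreamConsumerConnect(&conn, stream);

    for (;;) {
        CUgraphicsResource resource;
        CUstream cudaStream = 0;

        // Blocks until the producer presents the next frame.
        if (cuEGLStreamConsumerAcquireFrame(&conn, &resource, &cudaStream,
                                            CUDA_EGL_INFINITE_TIMEOUT)
                != CUDA_SUCCESS)
            break;

        CUeglFrame frame;
        cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0);
        // For a pitch-linear frame, frame.frame.pPitch[0] addresses the
        // RAW Bayer data; the sample checks the frame type and format,
        // then launches its demosaic kernel.

        cuEGLStreamConsumerReleaseFrame(&conn, resource, &cudaStream);
    }

    cuEGLStreamConsumerDisconnect(&conn);
}
```

Note the consumer thread still blocks once per frame in the acquire call, but the buffers stay in device-accessible memory the whole way.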
