Is there a way to get RAW data from a camera (bypassing the Orin ISP) directly into a CUDA pipeline, without context switches to wake up userspace code waiting on a V4L2 file descriptor?
It looks like EGL streams can do this, but I don't see any way to get RAW camera data into an EGL stream on JetPack-based systems. DRIVE OS appears to have several ways to do this, but I don't see how to do it with JetPack, where the only API for reading RAW camera data seems to be V4L2.
I see that example uses VIDIOC_DQBUF, which means userspace has to wake up for every frame, and the GPU cannot start processing until userspace launches the next stage (a sketch of that pattern is below). Is there any way to do this without those extra context switches?
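To make that concrete, here is a minimal sketch of the loop I mean; device setup, buffer allocation, and VIDIOC_STREAMON are assumed to have been done already, and process_frame_on_gpu() is just a placeholder:

```c
/*
 * Sketch of the per-frame pattern I want to avoid: userspace blocks on the
 * V4L2 fd, wakes up when the capture hardware finishes a frame, and only
 * then can it launch GPU work on that frame.
 */
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <string.h>

void capture_loop(int fd)            /* fd: an open, streaming V4L2 device */
{
    for (;;) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;

        /* Blocks until a frame is ready: one wakeup / context switch per frame. */
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
            break;

        /* Only now can userspace hand the frame to CUDA. */
        /* process_frame_on_gpu(buf.index);   (placeholder) */

        /* Hand the buffer back to the driver for the next capture. */
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
            break;
    }
}
```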
I have an RGB-IR camera, which the ISP does not support, so I'm going to do the ISP-style work in CUDA, and I'd like to do it with the best performance that is easily achievable. Keeping a single buffer which the camera hardware writes to and CUDA reads from is straightforward (a sketch of that part follows). From the TRM, it looks like the host controller should be able to trigger the GPU to execute CUDA code via a syncpoint, without involving userspace or kernelspace code on the main CPU complex. DRIVE OS appears to have APIs that achieve this, so I'm asking whether there is one for JetPack.
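For reference, this is roughly what I mean by the straightforward single-buffer path: export the V4L2 buffer as a dma-buf, wrap it in an EGLImage, and register it with CUDA so kernels read the same memory the capture hardware writes into. The EGLDisplay setup, the DRM fourcc for my RAW format, and the single-plane layout are assumptions here, not a definitive recipe:

```c
/*
 * Zero-copy sketch: V4L2 dma-buf fd (from VIDIOC_EXPBUF) -> EGLImage ->
 * CUDA graphics resource.  Single-plane RAW buffer assumed.
 */
#define EGL_EGLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda_runtime.h>
#include <cuda_egl_interop.h>
#include <stddef.h>

cudaGraphicsResource_t register_dmabuf(EGLDisplay dpy, int dmabuf_fd,
                                       EGLint width, EGLint height,
                                       EGLint pitch, EGLint fourcc)
{
    EGLint attrs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      fourcc,     /* RAW pixel format code */
        EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,  /* fd from VIDIOC_EXPBUF */
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  pitch,
        EGL_NONE
    };

    EGLImageKHR img = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                        EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

    /* After registering, cudaGraphicsResourceGetMappedEglFrame() yields a
     * frame that a CUDA kernel can read directly -- no copy involved. */
    cudaGraphicsResource_t res = NULL;
    cudaGraphicsEGLRegisterImage(&res, img, cudaGraphicsRegisterFlagsNone);
    return res;
}
```

That removes the copy, but every kernel launch is still gated on the VIDIOC_DQBUF wakeup shown above, which is the part I'd like to eliminate.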