Hi,
I'm trying to use a Jetson Nano with a USB camera, and I want to feed the camera image into CUDA. I was following the 12_camera_v4l2_cuda sample (https://docs.nvidia.com/jetson/l4t-multimedia/l4t_mm_12_camera_v4l2_cuda.html). It says I can apply CUDA code in the function HandleEGLImage, but that applies the CUDA processing to EGL images, and I don't want to use EGL images. Can I somehow map a device pointer to a DMA buffer instead?
In the camera_v4l2_cuda.cpp example file, a dmabuff_fd is obtained during the camera stream:
if (ctx->capture_dmabuf) {
    /* Cache sync for VIC operation */
    NvBufferMemSyncForDevice(ctx->g_buff[v4l2_buf.index].dmabuff_fd, 0,
            (void**)&ctx->g_buff[v4l2_buf.index].start);
} else {
    Raw2NvBuffer(ctx->g_buff[v4l2_buf.index].start, 0,
            ctx->cam_w, ctx->cam_h, ctx->g_buff[v4l2_buf.index].dmabuff_fd);
}
I wanted to do a device-to-device CUDA memory copy from ctx->g_buff[v4l2_buf.index].start directly to an allocated device pointer, assuming the buffer is already usable by the GPU. However, that does not seem to be the case: I always get zeros in the copied memory.
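Concretely, my attempt looked like the sketch below. This is my own code, not part of the sample; d_dst, the frame size, and the YUYV pixel-format assumption are all mine:

```cuda
#include <cuda_runtime.h>

/* Hypothetical sketch of what I tried. I assumed ctx->g_buff[...].start
 * is already a GPU-accessible pointer, which is apparently wrong. */
size_t size = ctx->cam_w * ctx->cam_h * 2;   /* assuming 2 bytes/pixel (YUYV) */
unsigned char *d_dst = NULL;
cudaMalloc((void **)&d_dst, size);

/* Device-to-device copy straight from the mapped capture buffer:
 * this succeeds but d_dst ends up containing only zeros. */
cudaMemcpy(d_dst, ctx->g_buff[v4l2_buf.index].start, size,
           cudaMemcpyDeviceToDevice);
```

So I suspect the pointer in start is not actually a CUDA device pointer when the buffer is a dmabuf.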
I’m wondering what the correct way is. Thank you very much!
Best,
Caili