Transfer video frames from a PCIe capture card to Jetson TX1 device memory (for RT video processing)

Hi,

I’m trying to do real-time video processing on the Jetson TX1. I’ve got a Magewell ProCapture HDMI (PCIe capture card) connected to the PCIe slot on the Jetson, feeding uncompressed 1920x1080 4:4:4 RGB frames at approx. 60 fps. The card is claimed to be SGDMA-capable, and Magewell’s SDK supplies a function which supposedly transfers frames to physical addresses (MWCaptureFrameToPhysicalAddress). I’ve tried and profiled the following methods of transferring the frames to device memory (for processing the data in CUDA kernels):

a) Set the cudaDeviceMapHost flag. Allocate mapped memory on the host side (cudaHostAlloc(… , cudaHostAllocMapped)). Get the device pointer (cudaHostGetDevicePointer()). Use the Magewell API function (MWCaptureVideoFrameToVirtualAddressEx) to transfer the frame to this host memory location. So this is zero-copy, if I’m not mistaken (a sketch of this setup follows the list).

b) Use Magewell API function but without zero copy this time (malloc on host side, cudaMalloc on device side, use cudaMemcpy to transfer)

For (c) and (d): the Magewell device shows up as a video input on V4L2 (/dev/videoX).

c) Allocate mapped memory on the host side (as in (a)). Use the OpenCV VideoReader to read frames via the V4L2 interface into the mapped memory slot. So this is also zero-copy, if I’m not mistaken.

d) Malloc memory on the host side. Use the OpenCV VideoReader to read frames via the V4L2 interface into this host memory and then do cudaMemcpy to the cudaMalloc’ed device memory.
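
For reference, here’s a minimal sketch of how I set up method (a). The frame size and kernel launch are placeholders, and I’ve elided the actual Magewell call:

#include <cuda_runtime.h>

const size_t frameSize = 1920 * 1080 * 3; // 4:4:4 RGB, approx. 6 MB

// Must be called before the CUDA context is created
cudaSetDeviceFlags(cudaDeviceMapHost);

// Pinned, mapped host memory: the capture card writes into hostPtr...
void *hostPtr = nullptr;
cudaHostAlloc(&hostPtr, frameSize, cudaHostAllocMapped);

// ...and kernels read the same pages through devPtr, so no explicit
// cudaMemcpy is needed (zero-copy)
void *devPtr = nullptr;
cudaHostGetDevicePointer(&devPtr, hostPtr, 0);

// MWCaptureVideoFrameToVirtualAddressEx(..., hostPtr, frameSize, ...);
// myKernel<<<grid, block>>>((unsigned char *)devPtr, ...);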

My question is: all of these methods first write to host-side memory, and from there either I transfer the frames to device memory (via cudaMemcpy) or CUDA reads the host memory directly in the zero-copy case (I guess?). Is there a way to write these frames directly into device memory, bypassing the host side? I know this would be possible if I were using a GPUDirect-capable GPU, but is there a similar option on the Jetson TX1 which would be faster than the above-mentioned methods (a-d)?

By the way, the application does the following (a rough sketch of the CUDA side follows the list):

  • Take a 1920x1080 RGB frame (so approx. 6MB) into device memory (using one of the methods above)
  • Take the FFT of this frame (C2C cuFFT)
  • Element-wise multiplication with a complex number
  • Take the IFFT of the multiplication (C2C cuFFT)
  • Display the new frame on screen.
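
A minimal sketch of that pipeline, assuming the frame has already been unpacked to cufftComplex in device memory (the multiply kernel is my own illustration; note that cuFFT’s inverse transform is unnormalized, so the result needs scaling by 1/N):

#include <cufft.h>
#include <cuComplex.h>

__global__ void mulByConst(cufftComplex *d, cufftComplex c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d[i] = cuCmulf(d[i], c); // element-wise complex multiplication
}

void process(cufftComplex *d_frame, float re, float im)
{
    const int W = 1920, H = 1080, N = W * H;

    cufftHandle plan;
    cufftPlan2d(&plan, H, W, CUFFT_C2C); // nx = rows, ny = cols

    cufftExecC2C(plan, d_frame, d_frame, CUFFT_FORWARD);
    mulByConst<<<(N + 255) / 256, 256>>>(d_frame, make_cuComplex(re, im), N);
    cufftExecC2C(plan, d_frame, d_frame, CUFFT_INVERSE);
    // scale by 1.0f / N before display (cuFFT inverse is unnormalized)

    cufftDestroy(plan);
}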

Thanks in advance for your help,
Burak

Hi,

You can transfer the frame to a DMA buffer and register the buffer to be CUDA-accessible.
Please check the examples in tegra_multimedia_api.

Thanks.

Hi AastaLLL,

I guess you’re referring to a snippet in a source file under “/home/ubuntu/tegra_multimedia_api/samples”, is that correct? Could you point me to a specific example under tegra_multimedia_api?

Thanks for your help,
Burak

I’ve come across the following posts and think I have a grasp of the issue:

In Allen_Z’s post [1] he suggests the following:

AastaLLL confirms this is a good method:

Then dumbgeorge says in his post [2] that he couldn’t decode the above conversation, and WayneWWW replies that he needs to look at the “mmapi backend sample” and use mapEGLImage2Float to map this into CUDA-accessible memory:

So from what I can understand from these, the procedure should be as follows:

During initialization:

  1. Call “int NvBufferCreate (int *dmabuf_fd, int width, int height, NvBufferLayout layout, NvBufferColorFormat colorFormat);” to get a pointer (dmabuf_fd) to a memory buffer (physical memory location to which I can write from the third party PCIe capture card?).
  2. Register the image to an “EGL display” that’s loaded onto this buffer using “NvEGLImageFromFd(EGLDisplay display, int *dmabuf_fd)” (I’m assuming there was a typo in that post when WayneWWW wrote “int dmabuf_fd” instead of “int *dmabuf_fd”)
  3. Use mapEGLImage2Float(…) to register this image to a CUDA kernel accessible memory location (which, I assume, is this “void* cuda_buf”, which would be a device pointer?)

During runtime:

  1. Use the third party PCIe capture card API to write into this physical memory location pointed to by “int *dmabuf_fd”
  2. Use the device pointer specified by “void* cuda_buf” above to process the input image.

AastaLLL, could you confirm the above procedure, or correct me if it’s not right?

[1] CUDA Zero Copy On TX1 - Jetson TX1 - NVIDIA Developer Forums
[2] Translating CPU based OpenCV code to GPU based OpenCV code - Jetson TX1 - NVIDIA Developer Forums

Hi,

YES.

Please check the backend sample. It will guide you through this.
Thanks.

Hi AastaLLL,

I’m trying to initialize an NvBuffer using NvBufferCreate from the “nvbuf_utils” library, as you’ve confirmed in step 1:

#define Mx 1024
#define My 1024

byte *data_mem;
int dmabuf_fd1 = 0;
int ret;
// int NvBufferCreate (int *dmabuf_fd, int width, int height, NvBufferLayout layout, NvBufferColorFormat colorFormat);
ret = NvBufferCreate(&dmabuf_fd1, (int) My, (int) Mx, NvBufferLayout_BlockLinear, NvBufferColorFormat_XRGB32);
if (ret != 0)
{
    std::cout << "NvBufferCreate failed" << std::endl;
}

EGLDisplay egl_display;
// Get default EGL display
egl_display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
if (egl_display == EGL_NO_DISPLAY)
{ 
    std::cout << "Error while get EGL display connection" << std::endl;
}
// Init EGL display connection
if (!eglInitialize(egl_display, NULL, NULL))
{
    std::cout << "Error while initialize EGL display connection" << std::endl;
}

EGLImageKHR egl_image = NULL;
egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd1);
if(egl_image == NULL)
{
	std::cout << "NvEGLImageFromFd failed" << std::endl;
}

void *cuda_buf = ptr_d; // ptr_d is a device pointer I cudaMalloc'ed beforehand
// map eglimage into GPU address
mapEGLImage2Float(&egl_image,  Mx, My, (byte *)cuda_buf);

I need the starting memory address of this initialized buffer in order to copy the input frame to it using the PCIe capture card API. How can I get this memory address? (I guess this dmabuf_fd is just a file descriptor which somehow identifies the buffer, not the memory address like I stated in my previous post. Right?)

Also, it seems like the buffer isn’t getting created correctly because I’m getting the following output:

NvEGLImageFromFd: Failed to create EGLImage from dma-buf fd (1828717745)
NvEGLImageFromFd failed
cuGraphicsEGLRegisterImage failed: 999, cuda process stop

Burak

Hi,

Sorry for the misleading information.

You need a driver to route the input from the PCIe interface to the CSI interface.
Then you can open the camera with MMAPI. (Please open the CSI camera with Argus.)

Here is a relevant topic:
https://devtalk.nvidia.com/default/topic/980505/visionworks-cannot-fetch-the-frame-from-dev-video0-tc358840-/?offset=23

Hi,

Could you please answer the following questions:

  1. In your last post did you mean to say a) or b) (see both below)? Or something else that I didn’t catch? Could you please give more details?

  2. If a) is true, does this mean there is no way to get frames coming from a PCIe capture card directly (not from host memory with cudaMemcpy or with zero copy) into CUDA kernel-accessible memory on a Jetson TX1?

  3. If b) is true, where can I get this driver?

  4. Assuming I’ve somehow (using your answer to question 1) made my input a CSI-type input → I understand I need to use createNvBuffer() from the Argus library, which gives me a file descriptor (FD). Then I’ll pass this FD to NvEGLImageFromFd() and the resulting EGL image to mapEGLImage2Float() to set up the buffer. I need the memory address of this buffer to copy data into it. How do I get the memory address of the buffer to which I need to feed the data? Is there a function in Argus that automatically does this copy by looking at the FD (i.e., do I actually not need this memory address)?


a) I need to get this TC358840, look at the post you linked, use their drivers, etc. to set this chip up. Then use the createNvBuffer() function in the Argus library to get the buffer going using the CSI input, then NvEGLImageFromFd(), then mapEGLImage2Float() → I have a buffer linked to CUDA-accessible memory.

b) I need to get a software driver from somewhere to trick the Jetson into routing frames coming into my PCIe input to a CSI buffer, and then use the createNvBuffer() function in the Argus library to get the buffer going, then NvEGLImageFromFd, then mapEGLImage2Float() → I have a buffer linked to CUDA-accessible memory.


Thanks,
Burak

Hi,

After discussing internally, we think it will be much easier to implement a PCIe → V4L2 driver to make your camera work.

Here is the skeleton:

You can check if your camera vendor has implemented this driver, or write your own.

Thanks.

Hi AastaLLL,

Thanks for taking the time to discuss this and for your recommendation, but I really have a hard time bringing the pieces together from your answers. Can you please answer the numbered questions in my previous post?

About using a PCIe → V4L2 approach:

The vendor does have a driver for this: the device shows up as /dev/video1, and I can get frames using, for example, the OpenCV VideoReader. But the latency is huge, and I still have to use cudaMemcpy to get the frames into device memory for processing by my CUDA kernels, since the copy via V4L2 goes into host memory. So this is not an answer to my question.

I would really appreciate it if you could answer the numbered questions in my previous post.

Thanks for your help,
Burak

Hi,

You don’t need to get a TC358840. I posted the topic just because you are facing a similar issue.

So the procedure should be:

  1. Enable the PCIe → V4L2 path:
    It’s good to know you have the driver already.

  2. Read camera frames with MMAPI. Once you have configured your camera to the V4L2 interface, you can open it as a generic USB camera.
    Check sample 12_camera_v4l2_cuda for details; a sketch of the relevant calls follows this list.
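
A simplified sketch of the pattern the sample uses (error handling elided; cam_fd is the opened /dev/video* descriptor and dmabuf_fd comes from NvBufferCreate()):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Ask the driver for DMABUF-backed capture buffers
struct v4l2_requestbuffers req = {0};
req.count = 4;
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.memory = V4L2_MEMORY_DMABUF;
ioctl(cam_fd, VIDIOC_REQBUFS, &req);

// Queue an NvBuffer's dma-buf fd as the capture target
struct v4l2_buffer buf = {0};
buf.index = 0;
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_DMABUF;
buf.m.fd = dmabuf_fd;
ioctl(cam_fd, VIDIOC_QBUF, &buf);

// After VIDIOC_STREAMON, VIDIOC_DQBUF returns the filled buffer; its fd
// can then be wrapped with NvEGLImageFromFd() for CUDA access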

By the way, OpenCV uses CPU-based FFmpeg to decode camera frames, and it is slow.
Thanks.

Hi,

I don’t have a sample called “12_camera_v4l2_cuda”. Here is the output of ls from “~/tegra_multimedia_api/samples”:

00_video_decode
01_video_encode
02_video_dec_cuda
03_video_cuda_enc
04_video_dec_gie
05_jpeg_encode
06_jpeg_decode
07_video_convert
09_camera_jpeg_capture
10_camera_recording
11_camera_object_identification
backend
common
Rules.mk

Where can I download this sample “12_camera_v4l2_cuda”? I’ve got L4T “24.2.1”.

Thanks,
Burak

Please install JetPack 3.1:

Thanks.

Hi Nvidia,

The OP asked a really good question that was never fully answered. I’m sure many people would want an answer to it:

In other words, how do you go from the file descriptor to an NvBuffer pointer?

We have:
int FD_of_the_created_NvBuffer

We want:
NvBuffer* pointer_to_the_NvBuffer_itself

Hi,

  1. NvBuffer is an open source class. You can find more information at ‘~/tegra_multimedia_api/samples/common/classes/NvBuffer.cpp’

  2. Check the constructor for data pointer:

this->planes[i].data = NULL;

An NvMM buffer can’t be accessed directly via CUDA. We use EGLImage for this purpose:
wrap the NvMM memory in an EGLImage and use that with CUDA.
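
A rough sketch of this with the CUDA driver API (this is essentially what the EGLImage helpers in the samples do internally; egl_image is assumed to come from NvEGLImageFromFd()):

#include <cudaEGL.h>

CUgraphicsResource resource;
CUeglFrame eglFrame;

// Register the EGLImage with CUDA and retrieve the mapped frame
cuGraphicsEGLRegisterImage(&resource, egl_image, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0);

// eglFrame.frame.pPitch[0] now points to plane 0 in CUDA-accessible memory
// myKernel<<<grid, block>>>((unsigned char *)eglFrame.frame.pPitch[0], ...);

cuGraphicsUnregisterResource(resource);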

Thanks for the reply. I understand that NvBuffer is an open source class, but your response doesn’t answer my question.

I want to access the methods and fields of an NvBuffer like any other object.

But nvbuf_utils doesn’t give me an NvBuffer object, or even a pointer to an NvBuffer object. Instead, it gives me a file descriptor.

For example, I’m searching for the missing piece in the following pseudocode:

int file_descriptor_of_the_nvbuffer_instance;
NvBufferCreate(&file_descriptor_of_the_nvbuffer_instance, .......)

// now I have a file descriptor for the NvBuffer. But this isn't useful to me because I can't access methods and fields with a file descriptor

NvBuffer* pointer_to_the_nvbuffer_instance

// MISSING PIECE, WANT TO GET THE POINTER TO THE NVBUFFER REPRESENTED BY THE FILE DESCRIPTOR

// now I can access the methods and fields of the NvBuffer
// for example:

std::cout << pointer_to_the_nvbuffer_instance->planes[0].fmt.width << std::endl;

// or another example

pointer_to_the_nvbuffer_instance->planes[0].data = some_other_pointer

Hi,

Please check ‘~/tegra_multimedia_api/include/nvbuf_utils.h’

int NvBufferGetParams (int dmabuf_fd, NvBufferParams *params);
.....

typedef struct _NvBufferParams
{
  uint32_t dmabuf_fd;
  void *nv_buffer;
  uint32_t nv_buffer_size;
  uint32_t pixel_format;
  uint32_t num_planes;
  uint32_t width[MAX_NUM_PLANES];
  uint32_t height[MAX_NUM_PLANES];
  uint32_t pitch[MAX_NUM_PLANES];
  uint32_t offset[MAX_NUM_PLANES];
  uint32_t psize[MAX_NUM_PLANES];
  uint32_t layout[MAX_NUM_PLANES];
}NvBufferParams;

Thanks! That’s the first thing I tried, and that is what I thought I was looking for, but it doesn’t seem to work.

For example, see the following example:

int fd;
NvBufferCreate(&fd, 1920, 1080, NvBufferLayout_Pitch, NvBufferColorFormat_UYVY);

NvBufferParams params;
NvBufferGetParams(fd, &params);
// parameters are set properly here, and I can access the nv_buffer field of the params struct

NvBuffer* nvbuf_ptr = (NvBuffer*) params.nv_buffer;

std::cout << nvbuf_ptr->planes[0].fmt.width << std::endl;
// compiles and runs without error, but outputs garbage data

Hi,

Please check this topic:
https://devtalk.nvidia.com/default/topic/995449/how-to-get-the-nvbuffer-from-nvbuffergetparams/

Thanks.

Hi everyone,

Currently we want to develop a board with a Xilinx FPGA and a TX2, where the TX2 captures video from the FPGA over PCIe Gen2 x4. We have a Linux driver that works fine on an Intel x86 CPU but has not been tested on the TX2. We are also worried that memory copies could take more time on the TX2 platform. Our driver uses the “dma_alloc_coherent” Linux API with a 4 MB buffer, and the FPGA has a DMA engine that transfers the video data to TX2 DRAM. Could we create an NvBuffer (V4L2_MEMORY_DMABUF), get its memory address, and give it to the FPGA DMA to write into directly? Or do you have any way to reduce the number of memory copies?
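
For context, the allocation in our current driver is roughly the following sketch (names are illustrative, not our actual code; pdev is our PCI device):

#include <linux/dma-mapping.h>

#define VBUF_SIZE (4 * 1024 * 1024) /* 4 MB video buffer */

dma_addr_t dma_handle;
void *cpu_addr;

/* Coherent buffer in system DRAM: dma_handle is the bus address we
   program into the FPGA's DMA engine, cpu_addr is the kernel virtual
   address the driver reads from */
cpu_addr = dma_alloc_coherent(&pdev->dev, VBUF_SIZE, &dma_handle, GFP_KERNEL);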
