Image format conversion with NvBufferTransform

Hi,
I am using a MIPI camera with a resolution of 4192x3120 in UYVY format. I am using the DMA buffer configuration in a V4L2 application to capture the image. I am able to capture the image and have verified it by writing it to a file. I have to convert this image to the YUV420M format, so I am using NvBufferTransform. But the output image is full of lines only; I am not getting a proper image. The same pipeline works if I use mmap in V4L2.

I have followed the 12_camera_v4l2_cuda sample application.

if (ctx->capture_dmabuf) {
    // Cache sync for VIC operation
    NvBufferMemSyncForDevice(ctx->g_buff[v4l2_buf.index].dmabuff_fd, 0,
                             (void**)&ctx->g_buff[v4l2_buf.index].start);
} else {
    Raw2NvBuffer(ctx->g_buff[v4l2_buf.index].start, 0, ctx->cam_w, ctx->cam_h,
                 ctx->g_buff[v4l2_buf.index].dmabuff_fd);
}

// Convert the camera buffer from UYVY422 to YUV420P
if (-1 == NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd,
                            ctx->render_dmabuf_fd, &transParams))

I have configured transParams as follows:

memset(&transParams, 0, sizeof(transParams));
transParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
transParams.transform_filter = NvBufferTransform_Filter_Smart;
transParams.transform_flip = NvBufferTransform_None;
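
For completeness, here is a sketch of how explicit crop rectangles could be added to the same parameters, in case only part of a padded source surface should be transformed. The struct fields and flags are from nvbuf_utils.h on JetPack 4.x; the idea of cropping the source to the valid 4192x3120 region is only an assumption, not something verified to fix this issue.

// Sketch only: NvBufferTransform with an explicit source crop rectangle.
NvBufferTransformParams transParams;
memset(&transParams, 0, sizeof(transParams));

transParams.transform_flag   = NVBUFFER_TRANSFORM_FILTER | NVBUFFER_TRANSFORM_CROP_SRC;
transParams.transform_filter = NvBufferTransform_Filter_Smart;
transParams.transform_flip   = NvBufferTransform_None;

// Valid region of the source UYVY surface (top, left, width, height)
transParams.src_rect.top    = 0;
transParams.src_rect.left   = 0;
transParams.src_rect.width  = 4192;
transParams.src_rect.height = 3120;

if (-1 == NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd,
                            ctx->render_dmabuf_fd, &transParams))
    printf("NvBufferTransform failed\n");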

I am using the following params to create the buffer:
input_params_scale.payloadType = NvBufferPayload_SurfArray;
input_params_scale.width = 4192;
input_params_scale.height = 3120;
input_params_scale.layout = NvBufferLayout_Pitch;
input_params_scale.colorFormat = get_nvbuff_color_fmt(V4L2_PIX_FMT_YUV420M);
input_params_scale.nvbuf_tag = NvBufferTag_VIDEO_CONVERT;

if (-1 == NvBufferCreateEx(&ctx->scale_dmabuf_fd, &input_params_scale))  
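For reference, here is a sketch of how the UYVY capture-side NvBuffer could be created and its real line pitch queried with NvBufferGetParams (function, enum, and field names from nvbuf_utils.h on JetPack 4.x; the tag value is illustrative). The pitch the hardware allocates is typically larger than width * 2, which is exactly the alignment mismatch discussed below.

// Sketch: create a pitch-linear UYVY NvBuffer and inspect its allocated pitch.
NvBufferCreateParams input_params;
memset(&input_params, 0, sizeof(input_params));
input_params.payloadType = NvBufferPayload_SurfArray;
input_params.width       = 4192;
input_params.height      = 3120;
input_params.layout      = NvBufferLayout_Pitch;
input_params.colorFormat = NvBufferColorFormat_UYVY;
input_params.nvbuf_tag   = NvBufferTag_CAMERA;

int capture_fd = -1;
if (-1 == NvBufferCreateEx(&capture_fd, &input_params))
    printf("NvBufferCreateEx failed\n");

NvBufferParams params;
if (0 == NvBufferGetParams(capture_fd, &params)) {
    // pitch[0] is the real line stride in bytes. For 4192-wide UYVY it is
    // typically padded beyond 4192 * 2 = 8384 (e.g. to 8448), while V4L2
    // keeps writing tightly packed 8384-byte lines -- hence the line artifacts.
    printf("UYVY plane 0: %ux%u, pitch %u bytes\n",
           params.width[0], params.height[0], params.pitch[0]);
}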

The above pipeline works with mmap but does not work with DMA buffers. The latency with mmap is very high, so I cannot use it in my application.

I am using JetPack 4.4.

Any help/suggestion…?

Thanks,
JSP

Hi,
There is a constraint on buffer alignment. Please refer to
High CPU usage for video capturing - #14 by DaneLLL

A user has shared a solution:
High CPU usage for video capturing - #19 by DaneLLL
Please check and give it a try.

Hi DaneLLL,
Thanks for your quick reply. I have tried the solution suggested by the other user, but I am still facing the same problem.

I am trying to set bytesperline to 8448 via VIDIOC_S_FMT on the V4L2 device (4192 × 2 + 64, to make it aligned to 256). But when I read the configured parameter back using VIDIOC_G_FMT, it does not show the configured value; it returns bytesperline as 8384 (4192 × 2). Am I missing anything? Any suggestion?
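
For reference, the negotiation being attempted looks roughly like the sketch below (standard V4L2 ioctls; the helper function itself is only illustrative). The driver is free to override bytesperline on VIDIOC_S_FMT, which appears to be what is happening here.

// Sketch: request a padded bytesperline and read back what the driver accepted.
// The driver may silently clamp it to width * 2 (8384), as observed above.
#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>
#include <stdio.h>

static int set_capture_format(int fd)
{
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width        = 4192;
    fmt.fmt.pix.height       = 3120;
    fmt.fmt.pix.pixelformat  = V4L2_PIX_FMT_UYVY;
    fmt.fmt.pix.field        = V4L2_FIELD_NONE;
    fmt.fmt.pix.bytesperline = 8448;   /* 4192 * 2 = 8384, padded to a 256-byte multiple */

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return -1;

    /* Read back the format actually programmed by the driver */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
        return -1;

    printf("bytesperline accepted by driver: %u\n", fmt.fmt.pix.bytesperline);
    return 0;
}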

Thanks,
JSP

Hi,
The v4l2 device may not support setting an arbitrary bytesperline. Are you able to set the width to 4096? This should avoid the hardware alignment constraint.

Hi DaneLLL,
I have tried with width 4096, but it is not supported by the camera, so it does not solve the problem. Do I need to look into the Tegra camera driver to check the bytesperline? Any suggestion?

Hi,
The data alignment is a constraint of the Jetson platforms, so you would need to check whether the camera can output frame data meeting the constraint. Please check:

  1. Whether the camera can output non-contiguous data, i.e. UYVY with pitch=8448, width=4192, height=3120.
  2. If 1. is not supported, whether the camera can output 4096x3120.

If the camera cannot adapt to either case, capturing into a CPU buffer first and then copying to an NvBuffer looks to be the only solution.
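
A rough sketch of that fallback path, assuming the V4L2 buffers are mmap-ed and Raw2NvBuffer is used to copy into the pitch-aligned NvBuffer (the ctx fields below follow the 12_camera_v4l2_cuda sample and are illustrative):

// Sketch of the fallback: V4L2 fills an mmap-ed CPU buffer with tightly packed
// UYVY (stride = width * 2), and Raw2NvBuffer copies it line by line into the
// pitch-aligned NvBuffer.
unsigned char *buf_start = ctx->g_buff[v4l2_buf.index].start;      /* mmap-ed V4L2 buffer */
int dmabuf_fd            = ctx->g_buff[v4l2_buf.index].dmabuff_fd; /* destination NvBuffer */

/* plane 0, width, height -- packed UYVY has a single plane */
if (-1 == Raw2NvBuffer(buf_start, 0, ctx->cam_w, ctx->cam_h, dmabuf_fd))
    printf("Raw2NvBuffer failed\n");

/* From here the existing NvBufferTransform() call to YUV420M applies unchanged */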
