MPEG encoding on Jetson Nano for UYVY video

Hi,
I have a MIPI camera that supplies video in UYVY format, and I am using V4L2 to access it. I want to encode this video to MPEG. Will the examples in the multimedia framework help with this use case? If not, are there any other sample applications to refer to?

Thanks.

Not sure I understand your use case… could you detail further what you expect from MPEG encoding?

Assuming you are referring to an MPEG-4 container with H264-encoded video, you can do that quickly with gstreamer.
You may try first:

gst-launch-1.0 -e videotestsrc ! video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1 ! nvvidconv ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=test.mp4

This simulates a 1080p@30fps video source in UYVY format, converts the format with the ISP, encodes into H264, and muxes into an mp4 container (you may also try qtmux in place of mp4mux). If you're running an older L4T release, try omxh264enc instead of nvv4l2h264enc.
To stop, press Ctrl-C in the same terminal where you launched the pipeline.
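For reference, the same pipeline with those alternatives swapped in (omxh264enc and qtmux) would look like the following; this is an untested sketch:

gst-launch-1.0 -e videotestsrc ! video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1 ! nvvidconv ! omxh264enc ! h264parse ! qtmux ! filesink location=test.mp4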

You would be able to play it back (assuming you’re running X) with:

gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! videoconvert ! xvimagesink

For older L4T releases, use omxh264dec instead of nvv4l2decoder.
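The corresponding older-release playback pipeline would just swap the decoder (again, an untested sketch):

gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! videoconvert ! xvimagesink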

So, if this works, the last step is replacing videotestsrc with your V4L2 camera (assuming here that it is /dev/video0 and that your sensor reports a mode with the same resolution and framerate in v4l2-ctl -d /dev/video0 --list-formats-ext; adapt as needed):

gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1 ! nvvidconv ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=test.mp4

If it doesn't work, you may also try the various io-mode options of the v4l2src plugin (see gst-inspect-1.0 v4l2src).
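For example, forcing DMABUF I/O would look like this (a sketch; which mode works best depends on your sensor driver):

gst-launch-1.0 -e v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1 ! nvvidconv ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=test.mp4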

Thank you Honey!

I am able to run the gstreamer pipeline. I want to use a user application based on the libargus library, or a V4L2-based library, to access the MIPI camera that is pumping in UYVY format. I have referred to the tegra_multimedia_API samples (sample 05) for using the NVIDIA HW accelerator to encode the UYVY into MJPEG video. My questions are:

  1. Is the UYVY format supported by NvJpegEncoder?
  2. If not, how can the nvv4l2h264enc plugin be used in application code?
  3. What is the optimal way to access the MIPI camera and encode the video?

Thanks!

Someone else can probably answer your questions better.
For 1, it seems from the gstreamer nvvidconv plugin that you can use the ISP HW to perform UYVY to I420 conversion, which might help before nvjpegenc.
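For instance, a pipeline along these lines might work (an untested sketch, reusing the /dev/video0 UYVY source from above; depending on your release you may have to drop the (memory:NVMM) caps before nvjpegenc):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvjpegenc ! multifilesink location=frame_%05d.jpg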
For the other questions, you can probably find out from the Argus/MMAPI samples, but someone else can better advise.

Hi, I have tried using the tegra_multimedia_API sample application 12 to receive the video in UYVY format from the camera. The video is received fine; I have verified it by storing one frame. Then, to encode the video, I converted it into YUV420M using the NvBufferTransform API. I saved the frame using the following piece of code:

static bool
save_yuv420(int dmabuf_fd)
{
    NvBufferParams params = {0};
    void *sBaseAddr[3] = {NULL};
    int ret = 0;
    int size;
    unsigned i;
    int file;

    file = open("/home/nano/yuv420.yuv", O_CREAT | O_WRONLY | O_TRUNC,
                S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH);

    ret = NvBufferGetParams(dmabuf_fd, &params);
    if (ret != 0)
        ROS_ERROR("%s: NvBufferGetParams Failed \n", __func__);

    for (i = 0; i < params.num_planes; i++) {
        // Map the plane into CPU address space
        ret = NvBufferMemMap(dmabuf_fd, i, NvBufferMem_Read_Write, &sBaseAddr[i]);
        if (ret != 0)
            ROS_ERROR("%s: NvBufferMemMap Failed \n", __func__);

        // Sync the hardware-written buffer for CPU access
        ret = NvBufferMemSyncForCpu(dmabuf_fd, i, &sBaseAddr[i]);
        if (ret != 0)
            ROS_ERROR("%s: NvBufferMemSyncForCpu Failed \n", __func__);

        // Note: this writes pitch * height bytes per plane, so any
        // row padding (pitch > width) ends up in the file as well
        size = params.height[i] * params.pitch[i];
        if (-1 == write(file, sBaseAddr[i], size))
            ROS_ERROR("Error in yuv420 writing");

        ROS_INFO("yuv420 size: %d, %d,%d", params.height[i], params.width[i], params.pitch[i]);

        ret = NvBufferMemSyncForDevice(dmabuf_fd, i, &sBaseAddr[i]);
        if (ret != 0)
            ROS_ERROR("%s: NvBufferMemSyncForDevice Failed \n", __func__);

        ret = NvBufferMemUnMap(dmabuf_fd, i, &sBaseAddr[i]);
        if (ret != 0)
            ROS_ERROR("%s: NvBufferMemUnMap Failed \n", __func__);
    }

    close(file);
    return true;
}

But the recorded frame shows only lines. What is the issue here? Any help?

I have tried integrating the sample 07 video converter to convert the UYVY to YUV420M; again, the output is full of lines. I am attaching an image of the output for your reference. Please check.
[Attached image: yuv420_issue]

Input image: UYVY, 4192 × 3120

Hi,
Please share your release version for reference.
$ head -1 /etc/nv_tegra_release

In 12_camera_v4l2_cuda, NvBufferTransform() is called to convert to YUV420. You can refer to 05_jpeg_encode and integrate encodeFromFd() for JPEG encoding.
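A minimal sketch of that integration (based on the two samples; the function name, buffer size, and output path are placeholders, jpegenc would come from NvJPEGEncoder::createJPEGEncoder(), and dst_dmabuf_fd is assumed to be allocated with NvBufferCreate() as NvBufferColorFormat_YUV420):

#include "NvJpegEncoder.h"
#include "nvbuf_utils.h"
#include <cstdio>

static bool
convert_and_encode(int src_dmabuf_fd, int dst_dmabuf_fd, NvJPEGEncoder *jpegenc)
{
    // UYVY -> YUV420 conversion on the VIC, as in 12_camera_v4l2_cuda
    NvBufferTransformParams transParams = {0};
    transParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transParams.transform_filter = NvBufferTransform_Filter_Smart;
    if (NvBufferTransform(src_dmabuf_fd, dst_dmabuf_fd, &transParams) != 0)
        return false;

    // JPEG-encode the converted dmabuf, as in 05_jpeg_encode
    unsigned long out_buf_size = 4 * 1024 * 1024;
    unsigned char *out_buf = new unsigned char[out_buf_size];
    int ret = jpegenc->encodeFromFd(dst_dmabuf_fd, JCS_YCbCr, &out_buf, out_buf_size);
    if (ret == 0) {
        FILE *fp = fopen("frame.jpg", "wb");
        fwrite(out_buf, 1, out_buf_size, fp);
        fclose(fp);
    }
    delete[] out_buf;
    return ret == 0;
}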

Hi DaneLLL,
Thanks for your reply. Now I am able to encode the UYVY video by following 12_camera_v4l2_cuda and 05_jpeg_encode, and I am getting MJPEG video. But I am not able to use the DMA buffer method: when I enable DMA using "capture_dmabuf", the YUV420 transformation does not give a proper frame; it is full of lines. I have followed the 12_camera_v4l2_cuda example for this. Could you please help find the cause? I am using JetPack 4.3 on a Jetson Nano production module.

Thanks!

Hi,
Please share a patch on 12_camera_v4l2_cuda so that we can check and give suggestions.

Hi,
I have attached the code snippet that calls the transform and the encoder.

while (poll(fds, 1, 5000) > 0 && !quit)
{
    if (fds[0].revents & POLLIN) {
        struct v4l2_buffer v4l2_buf;

        // Dequeue camera buffer
        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ctx->capture_dmabuf)
            v4l2_buf.memory = V4L2_MEMORY_DMABUF;
        else
            v4l2_buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(ctx->cam_fd, VIDIOC_DQBUF, &v4l2_buf) < 0)
            ERROR("Failed to dequeue camera buff: %s (%d)",
                  strerror(errno), errno);
        ctx->frame++;

        if (ctx->capture_dmabuf) {
            // Cache sync for VIC operation
            NvBufferMemSyncForDevice(ctx->g_buff[v4l2_buf.index].dmabuff_fd, 0,
                      (void**)&ctx->g_buff[v4l2_buf.index].start);
        }
        else {
            Raw2NvBuffer(ctx->g_buff[v4l2_buf.index].start, 0,
                         ctx->cam_w, ctx->cam_h, ctx->g_buff[v4l2_buf.index].dmabuff_fd);
        }

        // Convert the camera buffer from YUV422 (UYVY) to YUV420
        if (-1 == NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd,
                                    ctx->render_dmabuf_fd, &transParams))
            ERROR("Failed to convert the buffer");

        if (ctx->frame == ctx->save_n_frame)
            save_yuv420(ctx, ctx->render_dmabuf_fd);

        jpeg_encoder(ctx);

        // Re-queue camera buffer
        if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
            ERROR("Failed to queue camera buffers: %s (%d)", strerror(errno), errno);
    }
}

With this, both the YUV420 and MJPEG frames are full of lines; there is no proper video.

prepare_buffer is the same as in the example.

Thanks!

Hi,
We actually cannot investigate further with a code snippet. It should work, because 12_camera_v4l2_cuda and 05_jpeg_encode work; something is likely wrong in the integration. A patch on 12_camera_v4l2_cuda would be helpful.