NVIDIA Multimedia APIs with UYVY sensor

Dear all,

I want to use the NVIDIA Multimedia APIs to convert UYVY frames to RGB instead of using OpenCV, because the OpenCV conversion takes about 40-50 ms per frame and I want to shorten that.

After following the steps in this thread:
https://devtalk.nvidia.com/default/topic/946840/jetson-tx1/soc_camera-driver-in-l4t-r24-1/1
I can run our camera driver with V4L2.

Then I tried to build Argus and run a simple app like argus_oneshot. It raised the errors below:

Failed to get camera devices
(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 212)
(Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 287)
(Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 304)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 315)
(Argus) Error EndOfFile: Client thread received an error from socket (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 147)
(Argus) Error EndOfFile:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 101)

I also tried killing the Argus daemon, exporting the environment variables, and then running the daemon again, as ShaneCCC advised:

kill the argus_daemon then
export enableCamScfLogs=1
export enableCamPclLogs=1
/usr/sbin/argus_daemon

This is the result:

(Argus) Error NotSupported: Socket bind failed; error=Address already in use (in libs/rpc_socket_server/RpcSocketServer.cpp, function runCore(), line 96)

Thanks,
Vu Nguyen

Hi Vu Nguyen,
1. Are you on r24.1 now?
2. Is your sensor a YUV sensor or a Bayer sensor?

Hi DaneLLL,

Thanks for your reply.

  1. I'm on r24.2 (latest JetPack 2.3.1).
  2. My camera is an ISX017. I think it's a YUV sensor.

Hi Vu Nguyen,
Please refer to the sample
[url]https://devtalk.nvidia.com/default/topic/984850/jetson-tx1/how-to-convert-yuv-to-jpg-using-jpeg-encoder-hardware-/post/5048479/#5048479[/url]

After you are able to run the sample, you need to integrate tegra_multimedia_api/samples/07_video_convert for UYVY → RGBA/BGRx conversion.
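For reference, a minimal sketch of what that conversion element looks like with NvVideoConverter (the class used by 07_video_convert); the ctx->cam_w/cam_h fields, V4L2_BUFFERS_NUM and the ERROR_RETURN macro are assumptions borrowed from the sample code quoted later in this thread:

    // Create a VIC-backed converter: UYVY in, 32-bit BGRx-style out.
    NvVideoConverter *conv = NvVideoConverter::createVideoConverter("conv");
    if (!conv)
        ERROR_RETURN("Failed to create video converter");

    // The output plane takes the camera's UYVY frames; the capture plane
    // produces the converted ABGR32 frames.
    if (conv->setOutputPlaneFormat(V4L2_PIX_FMT_UYVY, ctx->cam_w, ctx->cam_h,
                V4L2_NV_BUFFER_LAYOUT_PITCH) < 0 ||
        conv->setCapturePlaneFormat(V4L2_PIX_FMT_ABGR32, ctx->cam_w, ctx->cam_h,
                V4L2_NV_BUFFER_LAYOUT_PITCH) < 0)
        ERROR_RETURN("Failed to set converter plane formats");

    // Import the camera dmabufs on the output plane, mmap the converted frames.
    conv->output_plane.setupPlane(V4L2_MEMORY_DMABUF, V4L2_BUFFERS_NUM, false, false);
    conv->capture_plane.setupPlane(V4L2_MEMORY_MMAP, V4L2_BUFFERS_NUM, true, false);

    conv->output_plane.setStreamStatus(true);
    conv->capture_plane.setStreamStatus(true);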

Hi DaneLLL,

Thanks for your reply. After going through that topic, I tried to modify the sample to encode the raw frames captured from the camera, but I cannot see any callback event.
Can you help me review this code?

/*
 * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *  * Neither the name of NVIDIA CORPORATION nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <errno.h>
#include <stdlib.h>
#include <signal.h>
#include <poll.h>
#include <iostream>
#include <fstream>

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>

#include "NvVideoEncoder.h"
#include <NvApplicationProfiler.h>
#include "NvEglRenderer.h"
#include "NvUtils.h"
#include "NvCudaProc.h"
#include "nvbuf_utils.h"

#include "camera_v4l2_cuda.h"

static bool quit = false;

using namespace std;
using namespace Argus;

// Constant configuration.
static const int    MAX_ENCODER_FRAMES = 5;
static const int    DEFAULT_FPS        = 30;

static bool DO_STAT = true;
static uint32_t     ENCODER_PIXFMT = V4L2_PIX_FMT_H264;
static Size         STREAM_SIZE (1280, 720);
static bool encoderCapturePlaneDqCallback(
            struct v4l2_buffer *v4l2_buf,
            NvBuffer *buffer,
            NvBuffer *shared_buffer,
            void *arg);
bool encoderOutputPlaneDqCallback(
            struct v4l2_buffer *v4l2_buf,
            NvBuffer *buffer,
            NvBuffer *shared_buffer,
            void *arg);
static void abort(context_t *ctx);

static void
print_usage(void)
{
    printf("\n\tUsage: camera_v4l2_cuda [OPTIONS]\n\n"
           "\tExample: \n"
           "\t./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f YUYV -n 30 -c\n\n"
           "\tSupported options:\n"
           "\t-d\t\tSet V4l2 video device node\n"
           "\t-s\t\tSet output resolution of video device\n"
           "\t-f\t\tSet output pixel format of video device (supports only YUYV/YVYU/UYVY/VYUY)\n"
           "\t-r\t\tSet renderer frame rate (30 fps by default)\n"
           "\t-n\t\tSave the n-th frame before VIC processing\n"
           "\t-c\t\tEnable CUDA aglorithm (draw a black box in the upper left corner)\n"
           "\t-v\t\tEnable verbose message\n"
           "\t-h\t\tPrint this usage\n\n"
           "\tNOTE: It runs infinitely until you terminate it with <ctrl+c>\n");
}

static bool parse_cmdline(context_t * ctx, int argc, char **argv)
{
    int c;

    if (argc < 2)
    {
        print_usage();
        exit(EXIT_SUCCESS);
    }

    while ((c = getopt(argc, argv, "d:s:f:r:n:cvh")) != -1)
    {
        switch (c)
        {
            case 'd':
                ctx->cam_devname = optarg;
                break;
            case 's':
                if (sscanf(optarg, "%dx%d",
                            &ctx->cam_w, &ctx->cam_h) != 2)
                {
                    print_usage();
                    return false;
                }
                break;
            case 'f':
                if (strcmp(optarg, "YUYV") == 0)
                    ctx->cam_pixfmt = V4L2_PIX_FMT_YUYV;
                else if (strcmp(optarg, "YVYU") == 0)
                    ctx->cam_pixfmt = V4L2_PIX_FMT_YVYU;
                else if (strcmp(optarg, "VYUY") == 0)
                    ctx->cam_pixfmt = V4L2_PIX_FMT_VYUY;
                else if (strcmp(optarg, "UYVY") == 0)
                    ctx->cam_pixfmt = V4L2_PIX_FMT_UYVY;
                else
                {
                    print_usage();
                    return false;
                }
                break;
            case 'r':
                ctx->fps = strtol(optarg, NULL, 10);
                break;
            case 'n':
                ctx->save_n_frame = strtol(optarg, NULL, 10);
                break;
            case 'c':
                ctx->enable_cuda = true;
                break;
            case 'v':
                ctx->enable_verbose = true;
                break;
            case 'h':
                print_usage();
                exit(EXIT_SUCCESS);
                break;
            default:
                print_usage();
                return false;
        }
    }

    return true;
}

static void
set_defaults(context_t * ctx)
{
    memset(ctx, 0, sizeof(context_t));

    ctx->cam_devname = "/dev/video0";
    ctx->cam_fd = -1;
    ctx->cam_pixfmt = V4L2_PIX_FMT_YUYV;
    ctx->cam_w = 1280;
    ctx->cam_h = 720;
    ctx->frame = 0;
    ctx->save_n_frame = 0;

    ctx->g_buff = NULL;
    ctx->renderer = NULL;
    ctx->got_error = false;
    ctx->fps = 30;

    pthread_mutex_init(&ctx->queue_lock, NULL);
    pthread_cond_init(&ctx->queue_cond, NULL);

    ctx->enable_cuda = false;
    ctx->egl_image = NULL;
    ctx->egl_display = EGL_NO_DISPLAY;

    ctx->enable_verbose = false;
    sprintf(ctx->cam_file, "encode.mp4");

    ctx->enc_output_plane_buf_queue = new queue < nv_buffer * >;
}

static nv_color_fmt nvcolor_fmt[] =
{
    // TODO add more pixel format mapping
    {V4L2_PIX_FMT_UYVY, NvBufferColorFormat_UYVY},
    {V4L2_PIX_FMT_VYUY, NvBufferColorFormat_VYUY},
    {V4L2_PIX_FMT_YUYV, NvBufferColorFormat_YUYV},
    {V4L2_PIX_FMT_YVYU, NvBufferColorFormat_YVYU},
};

static NvBufferColorFormat
get_nvbuff_color_fmt(unsigned int v4l2_pixfmt)
{
  // Iterate over the number of entries, not the size in bytes
  for (unsigned i = 0; i < sizeof(nvcolor_fmt) / sizeof(nvcolor_fmt[0]); i++)
  {
    if (v4l2_pixfmt == nvcolor_fmt[i].v4l2_pixfmt)
      return nvcolor_fmt[i].nvbuff_color;
  }

  return NvBufferColorFormat_Invalid;
}

static bool
camera_initialize(context_t * ctx)
{
    struct v4l2_format fmt;

    // Open camera device
    ctx->cam_fd = open(ctx->cam_devname, O_RDWR);
    if (ctx->cam_fd == -1)
        ERROR_RETURN("Failed to open camera device %s: %s (%d)",
                ctx->cam_devname, strerror(errno), errno);

    // Set camera output format
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = ctx->cam_w;
    fmt.fmt.pix.height = ctx->cam_h;
    fmt.fmt.pix.pixelformat = ctx->cam_pixfmt;
    fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
    if (ioctl(ctx->cam_fd, VIDIOC_S_FMT, &fmt) < 0)
        ERROR_RETURN("Failed to set camera output format: %s (%d)",
                strerror(errno), errno);

    // Get the real format in case the desired is not supported
    memset(&fmt, 0, sizeof fmt);
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(ctx->cam_fd, VIDIOC_G_FMT, &fmt) < 0)
        ERROR_RETURN("Failed to get camera output format: %s (%d)",
                strerror(errno), errno);
    if (fmt.fmt.pix.width != ctx->cam_w ||
            fmt.fmt.pix.height != ctx->cam_h ||
            fmt.fmt.pix.pixelformat != ctx->cam_pixfmt)
    {
        WARN("The desired format is not supported");
        ctx->cam_w = fmt.fmt.pix.width;
        ctx->cam_h = fmt.fmt.pix.height;
        ctx->cam_pixfmt =fmt.fmt.pix.pixelformat;
    }

    INFO("Camera ouput format: (%d x %d)  stride: %d, imagesize: %d",
            fmt.fmt.pix.width,
            fmt.fmt.pix.height,
            fmt.fmt.pix.bytesperline,
            fmt.fmt.pix.sizeimage);

    return true;
}

static bool createVideoEncoder(context_t * ctx)
{
    int ret = 0;

    ctx->m_VideoEncoder = NvVideoEncoder::createVideoEncoder("enc0");
    if (!ctx->m_VideoEncoder)
        ERROR_RETURN("Could not create m_VideoEncoderoder");

    if (DO_STAT)
        ctx->m_VideoEncoder->enableProfiling();

    ret = ctx->m_VideoEncoder->setCapturePlaneFormat(ENCODER_PIXFMT, STREAM_SIZE.width,
                                    STREAM_SIZE.height, 2 * 1024 * 1024);
    if (ret < 0)
        ERROR_RETURN("Could not set capture plane format");

    ret = ctx->m_VideoEncoder->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, STREAM_SIZE.width,
                                    STREAM_SIZE.height);
    if (ret < 0)
        ERROR_RETURN("Could not set output plane format");

    ret = ctx->m_VideoEncoder->setBitrate(4 * 1024 * 1024);
    if (ret < 0)
        ERROR_RETURN("Could not set bitrate");

    if (ENCODER_PIXFMT == V4L2_PIX_FMT_H264)
    {
        ret = ctx->m_VideoEncoder->setProfile(V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
    }
    else
    {
        ret = ctx->m_VideoEncoder->setProfile(V4L2_MPEG_VIDEO_H265_PROFILE_MAIN);
    }
    if (ret < 0)
        ERROR_RETURN("Could not set m_VideoEncoderoder profile");

    if (ENCODER_PIXFMT == V4L2_PIX_FMT_H264)
    {
        ret = ctx->m_VideoEncoder->setLevel(V4L2_MPEG_VIDEO_H264_LEVEL_5_0);
        if (ret < 0)
            ERROR_RETURN("Could not set m_VideoEncoderoder level");
    }

    ret = ctx->m_VideoEncoder->setRateControlMode(V4L2_MPEG_VIDEO_BITRATE_MODE_CBR);
    if (ret < 0)
        ERROR_RETURN("Could not set rate control mode");

    ret = ctx->m_VideoEncoder->setIFrameInterval(30);
    if (ret < 0)
        ERROR_RETURN("Could not set I-frame interval");

    ret = ctx->m_VideoEncoder->setFrameRate(30, 1);
    if (ret < 0)
        ERROR_RETURN("Could not set m_VideoEncoderoder framerate");

    // Query, Export and Map the output plane buffers so that we can read
    // raw data into the buffers
    ret = ctx->m_VideoEncoder->output_plane.setupPlane(V4L2_MEMORY_DMABUF, 10, false, false);
    if (ret < 0)
        ERROR_RETURN("Could not setup output plane");

    // Query, Export and Map the capture plane buffers so that we can read
    // encoded data from the buffers
    ret = ctx->m_VideoEncoder->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
    if (ret < 0)
        ERROR_RETURN("Could not setup capture plane");

    printf("create video encoder return true\n");
    return true;
}

static bool init_components(context_t * ctx)
{
    /*Initialize Camera*/
    if (!camera_initialize(ctx))
        ERROR_RETURN("Failed to initialize camera device");
    
    /*Create Video Encoder*/
    if (!createVideoEncoder(ctx))
        ERROR_RETURN("Failed to initialize encoder");

    /*Create an output file*/
    ctx->m_outputFile = new std::ofstream(ctx->cam_file);

    INFO("Initialize v4l2 components successfully");
    return true;
}

static bool request_camera_buff(context_t *ctx)
{
    // Request camera v4l2 buffer
    struct v4l2_requestbuffers rb;
    memset(&rb, 0, sizeof(rb));
    rb.count = V4L2_BUFFERS_NUM;
    rb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    rb.memory = V4L2_MEMORY_DMABUF;
    if (ioctl(ctx->cam_fd, VIDIOC_REQBUFS, &rb) < 0)
        ERROR_RETURN("Failed to request v4l2 buffers: %s (%d)",
                strerror(errno), errno);
    if (rb.count != V4L2_BUFFERS_NUM)
        ERROR_RETURN("V4l2 buffer number is not as desired");

    for (unsigned int index = 0; index < V4L2_BUFFERS_NUM; index++)
    {
        struct v4l2_buffer buf;

        // Query camera v4l2 buf length
        memset(&buf, 0, sizeof buf);
        buf.index = index;
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_DMABUF;

        if (ioctl(ctx->cam_fd, VIDIOC_QUERYBUF, &buf) < 0)
            ERROR_RETURN("Failed to query buff: %s (%d)",
                    strerror(errno), errno);

        // TODO add support for multi-planar buffers
        // Enqueue empty v4l2 buff into camera capture plane
        buf.m.fd = (unsigned long)ctx->g_buff[index].dmabuff_fd;
        if (buf.length != ctx->g_buff[index].size)
        {
            WARN("Camera v4l2 buf length is not expected");
            ctx->g_buff[index].size = buf.length;
        }

        if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &buf) < 0)
            ERROR_RETURN("Failed to enqueue buffers: %s (%d)",
                    strerror(errno), errno);
    }

    return true;
}

static bool prepare_buffers(context_t * ctx)
{
    // Allocate global buffer context
    ctx->g_buff = (nv_buffer *)malloc(V4L2_BUFFERS_NUM * sizeof(nv_buffer));
    if (ctx->g_buff == NULL)
        ERROR_RETURN("Failed to allocate global buffer context");

    // Create buffer and share it with camera and VIC output plane
    for (unsigned int index = 0; index < V4L2_BUFFERS_NUM; index++)
    {
        int fd;
        NvBufferParams params = {0};

        if (-1 == NvBufferCreate(&fd, ctx->cam_w, ctx->cam_h,
                    NvBufferLayout_Pitch,
                    get_nvbuff_color_fmt(ctx->cam_pixfmt)))
            ERROR_RETURN("Failed to create NvBuffer");

        ctx->g_buff[index].dmabuff_fd = fd;

        if (-1 == NvBufferGetParams(fd, &params))
            ERROR_RETURN("Failed to get NvBuffer parameters");

        // TODO add multi-planar support
        // Currently it supports only YUV422 interlaced single-planar
        ctx->g_buff[index].size = params.height[0] * params.pitch[0];
        ctx->g_buff[index].start = (unsigned char *)mmap(
                NULL,
                ctx->g_buff[index].size,
                PROT_READ | PROT_WRITE,
                MAP_SHARED,
                ctx->g_buff[index].dmabuff_fd, 0);
    }

    if (!request_camera_buff(ctx))
        ERROR_RETURN("Failed to set up camera buff");

    // Enqueue all the empty output plane buffers
    for (uint32_t i = 0; i < ctx->m_VideoEncoder->output_plane.getNumBuffers(); i++)
    {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;

        if(ctx->m_VideoEncoder->output_plane.qBuffer(v4l2_buf, NULL) < 0) {
	   abort(ctx);
           ERROR_RETURN("Failed to queue buffer on VIC capture plane");
	}
    }

    // Enqueue all the empty capture plane buffers
    for (uint32_t i = 0; i < ctx->m_VideoEncoder->capture_plane.getNumBuffers(); i++)
    {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;

        if(ctx->m_VideoEncoder->capture_plane.qBuffer(v4l2_buf, NULL) < 0) {
	   abort(ctx);
           ERROR_RETURN("Failed to queue buffer on VIC capture plane");
	}
    }

    INFO("Succeed in preparing stream buffers");
    return true;
}

static bool start_stream(context_t * ctx)
{
    enum v4l2_buf_type type;
    int e;

    // Start v4l2 streaming
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(ctx->cam_fd, VIDIOC_STREAMON, &type) < 0)
        ERROR_RETURN("Failed to start streaming: %s (%d)",
                strerror(errno), errno);
    /*Stream on*/
    e = ctx->m_VideoEncoder->output_plane.setStreamStatus(true);
    if (e < 0)
        ERROR_RETURN("Failed to stream on output plane");
    e = ctx->m_VideoEncoder->capture_plane.setStreamStatus(true);
    if (e < 0)
        ERROR_RETURN("Failed to stream on capture plane");

    usleep(200);

    INFO("Camera video streaming on ...");
    return true;
}

bool encoderOutputPlaneDqCallback(struct v4l2_buffer *v4l2_buf,
                                                   NvBuffer * buffer,
                                                   NvBuffer * shared_buffer,
                                                   void *arg) {
    context_t *ctx = (context_t *) arg;
    nv_buffer * cam_g_buff;

    printf("Output plane callback\n");
    if (!v4l2_buf)
    {
        abort(ctx);
        ERROR_RETURN("Failed to dequeue conv output plane buffer");
    }

    // Fetch nv_buffer to do format conversion
    pthread_mutex_lock(&ctx->queue_lock);
    while (ctx->enc_output_plane_buf_queue->empty())
    {
        pthread_cond_wait(&ctx->queue_cond, &ctx->queue_lock);
    }
    cam_g_buff = ctx->enc_output_plane_buf_queue->front();
    ctx->enc_output_plane_buf_queue->pop();
    pthread_mutex_unlock(&ctx->queue_lock);

    // Got EOS signal and return
    if (cam_g_buff->dmabuff_fd == 0)
        return false;
    else
    {
        // Enqueue enc output plane
        v4l2_buf->m.planes[0].m.fd =
            (unsigned long)cam_g_buff->dmabuff_fd;
        v4l2_buf->m.planes[0].bytesused = cam_g_buff->size;
    }

    if (ctx->m_VideoEncoder->output_plane.qBuffer(*v4l2_buf, NULL) < 0)
    {
        abort(ctx);
        ERROR_RETURN("Failed to enqueue VIC output plane");
    }

    return true;
}

bool encoderCapturePlaneDqCallback(struct v4l2_buffer *v4l2_buf,
                                                   NvBuffer * buffer,
                                                   NvBuffer * shared_buffer,
                                                   void *arg) {

    printf("Capture plane callback\n");
    context_t * ctx = (context_t*)arg;
    if (!v4l2_buf) {
        abort(ctx);
        ERROR_RETURN("Failed to dequeue buffer from encoder capture plane");
    }

    ctx->m_outputFile->write((char *) buffer->planes[0].data, buffer->planes[0].bytesused);

    if (ctx->m_VideoEncoder->capture_plane.qBuffer(*v4l2_buf, buffer) < 0) {
        abort(ctx);
        ERROR_RETURN("Error while queuing buffer on encoder capture plane");
    }

    // Got EOS from encoder. Stop dqthread.
    if (buffer->planes[0].bytesused == 0)
    {
        ERROR_RETURN("Got EOS, exiting...\n");
        return false;
    }

    return true;
}

static void abort(context_t *ctx)
{
    ctx->got_error = true;
    if (ctx->m_VideoEncoder)
        ctx->m_VideoEncoder->abort();
}

static void signal_handle(int signum)
{
    printf("Quit due to exit command from user!\n");
    quit = true;
}

static bool start_capture(context_t * ctx)
{
    struct sigaction sig_action;
    struct pollfd fds[1];
    int bufferIndex = 0; 

    // Ensure a clean shutdown if user types <ctrl+c>
    sig_action.sa_handler = signal_handle;
    sigemptyset(&sig_action.sa_mask);
    sig_action.sa_flags = 0;
    sigaction(SIGINT, &sig_action, NULL);

    // Set video encoder callback
    ctx->m_VideoEncoder->capture_plane.setDQThreadCallback(encoderCapturePlaneDqCallback);
    ctx->m_VideoEncoder->output_plane.setDQThreadCallback(encoderOutputPlaneDqCallback);

    // startDQThread starts a thread internally which calls the
    // encoderCapturePlaneDqCallback whenever a buffer is dequeued
    // on the plane
    ctx->m_VideoEncoder->capture_plane.startDQThread(ctx);
    ctx->m_VideoEncoder->output_plane.startDQThread(ctx);

    fds[0].fd = ctx->cam_fd;
    fds[0].events = POLLIN;
    while (poll(fds, 1, 5000) > 0 && !ctx->got_error && !quit)
    {
        if (fds[0].revents & POLLIN) {
	    struct v4l2_buffer v4l2_buf;

            // Dequeue camera buff
            memset(&v4l2_buf, 0, sizeof(v4l2_buf));
            v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            v4l2_buf.memory = V4L2_MEMORY_DMABUF;
            if (ioctl(ctx->cam_fd, VIDIOC_DQBUF, &v4l2_buf) < 0) {
                ERROR_RETURN("Failed to dequeue camera buff: %s (%d)",
                        strerror(errno), errno);
  	    }

            ctx->frame++;

            // Push nv_buffer into conv output queue for conversion
            pthread_mutex_lock(&ctx->queue_lock);
            ctx->enc_output_plane_buf_queue->push(&ctx->g_buff[v4l2_buf.index]);
    	    //ctx->m_outputFile->write((char*) ctx->g_buff[v4l2_buf.index].start, ctx->g_buff[v4l2_buf.index].size);
            pthread_cond_broadcast(&ctx->queue_cond);
            pthread_mutex_unlock(&ctx->queue_lock);

            // Enqueue camera buff
            // It might be more reasonable to wait for the completion of
            // VIC processing before enqueueing the current buff. But VIC
            // processing time is far less than the camera frame interval,
            // so we probably don't need such synchronization.
            if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
                ERROR_RETURN("Failed to queue camera buffers: %s (%d)",
                        strerror(errno), errno);
        }
    }

    if (quit)
    {
        // Signal EOS to the dq thread of VIC output plane
        ctx->g_buff[0].dmabuff_fd = 0;

        pthread_mutex_lock(&ctx->queue_lock);
        //TODO: ctx->conv_output_plane_buf_queue->push(&ctx->g_buff[0]);
        pthread_cond_broadcast(&ctx->queue_cond);
        pthread_mutex_unlock(&ctx->queue_lock);
    }

    // Stop Encoder thread
    if (!ctx->got_error)
    {
        ctx->m_VideoEncoder->waitForIdle(2000);
        ctx->m_VideoEncoder->capture_plane.stopDQThread();
        ctx->m_VideoEncoder->output_plane.stopDQThread();
    }

    return true;
}

static bool stop_stream(context_t * ctx)
{
    enum v4l2_buf_type type;

    // Stop v4l2 streaming
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(ctx->cam_fd, VIDIOC_STREAMOFF, &type))
        ERROR_RETURN("Failed to stop streaming: %s (%d)",
                strerror(errno), errno);

    INFO("Camera video streaming off ...");
    return true;
}

int main(int argc, char *argv[])
{
    context_t ctx;
    int error = 0;

    set_defaults(&ctx);

    CHECK_ERROR(parse_cmdline(&ctx, argc, argv), cleanup,
            "Invalid options specified");

    CHECK_ERROR(init_components(&ctx), cleanup,
            "Failed to initialize v4l2 components");

    CHECK_ERROR(prepare_buffers(&ctx), cleanup,
            "Failed to prepare v4l2 buffs");

    CHECK_ERROR(start_stream(&ctx), cleanup,
            "Failed to start streaming");

    CHECK_ERROR(start_capture(&ctx), cleanup,
            "Failed to start capturing");

    CHECK_ERROR(stop_stream(&ctx), cleanup,
            "Failed to stop streaming");

cleanup:
    if (ctx.cam_fd > 0)
        close(ctx.cam_fd);

    if (ctx.g_buff != NULL)
    {
        for (unsigned i = 0; i < V4L2_BUFFERS_NUM; i++)
            if (ctx.g_buff[i].dmabuff_fd)
                NvBufferDestroy(ctx.g_buff[i].dmabuff_fd);
        free(ctx.g_buff);
    }

    if (error)
        printf("App run failed\n");
    else
        printf("App run was successful\n");

    return -error;
}

If I feed the raw data to the encoder directly, as below, I get a segmentation fault:

while (poll(fds, 1, 5000) > 0 && !ctx->got_error && !quit)
    {
        if (fds[0].revents & POLLIN) {
            struct v4l2_buffer v4l2_buf;
    
            // Dequeue camera buff
            memset(&v4l2_buf, 0, sizeof(v4l2_buf));
            v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            v4l2_buf.memory = V4L2_MEMORY_DMABUF;
            if (ioctl(ctx->cam_fd, VIDIOC_DQBUF, &v4l2_buf) < 0) {
                ERROR_RETURN("Failed to dequeue camera buff: %s (%d)",
                        strerror(errno), errno);
            }

            ctx->frame++;

            // Push nv_buffer into conv output queue for conversion
            pthread_mutex_lock(&ctx->queue_lock);
            //ctx->enc_output_plane_buf_queue->push(&ctx->g_buff[v4l2_buf.index]);
            //ctx->m_outputFile->write((char*) ctx->g_buff[v4l2_buf.index].start, ctx->g_buff[v4l2_buf.index].size);
            // Enqueue enc output plane
            nv_buffer * cam_g_buff = (nv_buffer*) &ctx->g_buff[v4l2_buf.index];
            v4l2_buf.m.planes[0].m.fd = (unsigned long)cam_g_buff->dmabuff_fd;
            v4l2_buf.m.planes[0].bytesused = cam_g_buff->size;
     
            if (ctx->m_VideoEncoder->output_plane.qBuffer(v4l2_buf, NULL) < 0) {
                abort(ctx);
                ERROR_RETURN("Failed to enqueue VIC output plane");
            }

            pthread_cond_broadcast(&ctx->queue_cond);
            pthread_mutex_unlock(&ctx->queue_lock);

            // Enqueue camera buff
            // It might be more reasonable to wait for the completion of
            // VIC processing before enqueueing the current buff. But VIC
            // processing time is far less than the camera frame interval,
            // so we probably don't need such synchronization.
            if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
                ERROR_RETURN("Failed to queue camera buffers: %s (%d)",
                        strerror(errno), errno);
        }
    }

Thanks and Best Regards,
Vu Nguyen

Hi Vu Nguyen,
Please refer to 10_camera_recording about video encoders:

    // Enqueue all the empty capture plane buffers
    for (uint32_t i = 0; i < m_VideoEncoder->capture_plane.getNumBuffers(); i++)
    {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;

        CHECK_ERROR(m_VideoEncoder->capture_plane.qBuffer(v4l2_buf, NULL));
    }

And the frame-feeding part of the capture loop:

        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

        v4l2_buf.m.planes = planes;

        // Check if we need dqBuffer first
        if (bufferIndex < MAX_ENCODER_FRAMES &&
            m_VideoEncoder->output_plane.getNumQueuedBuffers() <
            m_VideoEncoder->output_plane.getNumBuffers())
        {
            // The queue is not full, no need to dqBuffer
            // Prepare buffer index for the following qBuffer
            v4l2_buf.index = bufferIndex++;
        }
        else
        {
            // Output plane full or max outstanding number reached
            CHECK_ERROR(m_VideoEncoder->output_plane.dqBuffer(v4l2_buf, &buffer,
                                                              NULL, 10));
        }
        // Push the frame into V4L2.
        v4l2_buf.m.planes[0].m.fd = fd;
        v4l2_buf.m.planes[0].bytesused = 1; // bytesused must be non-zero
        CHECK_ERROR(m_VideoEncoder->output_plane.qBuffer(v4l2_buf, NULL));

Hi DaneLLL,

I tried to port the encoder into tegra_multimedia_api_sample_pre_release from this topic: https://devtalk.nvidia.com/default/topic/984850/jetson-tx1/how-to-convert-yuv-to-jpg-using-jpeg-encoder-hardware-/post/5048479/#5048479

And I got this problem:

VENC: TryProcessingData: 2178:  VideoEncFeedImage failed. Input buffer 0 sent
VENC: NvMMLiteVideoEncDoWork: 2572: BlockSide error 0x2
NvVideoEnc: BlockError 
NvVideoEncTransferOutputBufferToBlock: DoWork failed line# 641 
NvVideoEnc: NvVideoEncTransferOutputBufferToBlock TransferBufferToBlock failed Line=652
ERROR: start_capture(): (line:584) Failed to enqueue VIC output plane
ERROR: main(): (line:653) Failed to start capturing
Capture plane callback
ERROR: encoderCapturePlaneDqCallback(): (line:481) Failed to dequeue buffer from encoder capture plane

My feeding code:

while (poll(fds, 1, 5000) > 0 && !ctx->got_error && !quit)
    {
        if (fds[0].revents & POLLIN) {
            struct v4l2_buffer v4l2_buf;

            // Dequeue camera buff
            memset(&v4l2_buf, 0, sizeof(v4l2_buf));

            v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            v4l2_buf.memory = V4L2_MEMORY_DMABUF;
            if (ioctl(ctx->cam_fd, VIDIOC_DQBUF, &v4l2_buf) < 0) {
                ERROR_RETURN("Failed to dequeue camera buff: %s (%d)",
                        strerror(errno), errno);
            }

            ctx->frame++;

            // Push nv_buffer into conv output queue for conversion
            //pthread_mutex_lock(&ctx->queue_lock);
            //ctx->enc_output_plane_buf_queue->push(&ctx->g_buff[v4l2_buf.index]);
            //ctx->m_outputFile->write((char*) ctx->g_buff[v4l2_buf.index].start, ctx->g_buff[v4l2_buf.index].size);
            // Enqueue enc output plane
            struct v4l2_buffer v4l2_buf_enc;
            struct v4l2_plane planes_enc[MAX_PLANES];

            memset(&v4l2_buf_enc, 0, sizeof(v4l2_buf_enc));
            memset(planes_enc, 0, MAX_PLANES * sizeof(struct v4l2_plane));

            v4l2_buf_enc.m.planes = planes_enc;

            // Check if we need dqBuffer first
            if (bufferIndex < MAX_ENCODER_FRAMES &&
            ctx->m_VideoEncoder->output_plane.getNumQueuedBuffers() <
            ctx->m_VideoEncoder->output_plane.getNumBuffers()) {
                v4l2_buf_enc.index = bufferIndex++;
            } else {
                if (ctx->m_VideoEncoder->output_plane.dqBuffer(v4l2_buf_enc, NULL, NULL, 10) < 0) {
                   ERROR_RETURN("Failed to dqueue output plane buffer\n");
                }
                NvBufferDestroy(v4l2_buf_enc.m.planes[0].m.fd);
                printf ("Released frame %d\r\n", v4l2_buf_enc.m.planes[0].m.fd);
            }

            v4l2_buf_enc.m.planes[0].m.fd = (unsigned long)ctx->g_buff[v4l2_buf.index].dmabuff_fd;
            v4l2_buf_enc.m.planes[0].bytesused = 1;

            if (ctx->m_VideoEncoder->output_plane.qBuffer(v4l2_buf_enc, NULL) < 0) {
                abort(ctx);
                ERROR_RETURN("Failed to enqueue VIC output plane");
            }

            // It might be more reasonable to wait for the completion of
            // VIC processing before enqueueing the current buff. But VIC
            // processing time is far less than the camera frame interval,
            // so we probably don't need such synchronization.
            if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
                ERROR_RETURN("Failed to queue camera buffers: %s (%d)",
                        strerror(errno), errno);
        }
    }

Thanks and Best Regards

Hi Vu,
Please refer to the attachment.

12_camera_v4l2_cuda_video_encode.zip (9.02 KB)

Hi DaneLLL,

Thanks, it works very well. When I calculated the processing time, the result is about 15 ms/frame, so I think I can write to one mp4 file at 30 fps. Here is my file-writing code:

typedef struct {
.....
std::ofstream *m_outputFile;
.....
}
ctx->m_outputFile = new std::ofstream("encode.mp4");
static bool
enc_capture_dqbuf_thread_callback(struct v4l2_buffer *v4l2_buf,
                                   NvBuffer * buffer, NvBuffer * shared_buffer,
                                   void *arg)
{
    ..................
    printf ("Processing Time : %f \r\n", time_span.count());
    ctx->m_outputFile->write((char*) buffer->planes[0].data, buffer->planes[0].bytesused);
}

In the end I have my mp4 movie; however, the action in the movie looks much faster than in reality, and if I record for 20 seconds, the movie is only 4-5 seconds long.
Do I need to synchronize when writing the file?

Thanks and Best Regards,
Vu Nguyen

Hi Vu,
The output is an H.264 stream, not mp4.

Could you try to play it with the 00_video_decode sample? There is a '-fps' option you can use to adjust the frame rate. An H.264 stream does not carry frame-rate information, so you need to assign it.

Hi DaneLLL,

Thank you for the information. I can try the 00_video_decode sample as well.
Can you let me know how to write the H.264 stream into an mp4 container?

Best Regards,
Vu Nguyen

Hi Vu, you can use ffmpeg https://ffmpeg.org/

./ffmpeg -i output.h264 -vcodec copy h264.mp4
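
If the playback speed looks wrong, you can force the input frame rate (a raw H.264 stream carries none); here assuming the capture was 30 fps:

./ffmpeg -r 30 -i output.h264 -vcodec copy h264.mp4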

Hi DaneLLL, it worked. However, playback is a bit slower than it should be. Should we set an fps somewhere?
Also, when I record for about 10 s and then use ffmpeg to write it into an mp4 container, the video is about 30 s long. Do you think this is because the encoder runs so fast that we need to synchronize?

I also want to use two cameras at the same time. How can I combine the data from two 1280x720 cameras into a 2560x720 frame before feeding it to the encoder?

Thanks and Best Regards,

Hi DaneLLL,

I realized that my cameras were running at 60 Hz, so I should only grab frames at half that rate; that solved the problem.
Now I can write to an mp4 file with ffmpeg from C++ code.
Currently, I want to know whether we can combine the data from two 1280x720 cameras into a 2560x720 frame before feeding it to the encoder.

Thanks and Best Regards,

Hi Vu,
We suggest creating a 2560x720 NvBuffer via NvBufferCreate() and combining the frames via CUDA, replacing HandleEGLImage() with your own CUDA code.
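
A CPU-side sketch of that side-by-side composition (a CUDA kernel would do the same addressing on the GPU); the function and its parameters are hypothetical, and the pitch values would come from NvBufferGetParams():

#include <string.h>

// Compose two UYVY images of the same size side by side into one buffer
// twice as wide. UYVY packs 2 bytes per pixel, so a row of 'width' pixels
// is width * 2 bytes; always step by each buffer's pitch, which may be
// larger than width * 2.
static void compose_side_by_side(unsigned char *dst, unsigned int dst_pitch,
                                 const unsigned char *left, unsigned int left_pitch,
                                 const unsigned char *right, unsigned int right_pitch,
                                 unsigned int width, unsigned int height)
{
    const unsigned int row_bytes = width * 2;   // UYVY: 2 bytes per pixel

    for (unsigned int y = 0; y < height; y++)
    {
        memcpy(dst + y * dst_pitch,             left  + y * left_pitch,  row_bytes);
        memcpy(dst + y * dst_pitch + row_bytes, right + y * right_pitch, row_bytes);
    }
}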

Hi DaneLLL,

I have never used CUDA before. Can you point me to a related sample?

Thanks and Best Regards

Hi vu,
Please install the CUDA samples via JetPack; they are in ~/NVIDIA_CUDA-8.0_Samples

More information: [url]http://docs.nvidia.com/cuda/cuda-samples/#axzz4dQeosNc2[/url]

Hi DaneLLL,
I tried to do it manually, but I cannot feed the NvBuffer to the encoder.

First, I created some NvBuffers for encoder.

bool HY_Encoder::PrepareEncoderBuffers() {
    //Allocate encoder buffer
    this->e_buff = (enc_buffer *)malloc(V4L2_BUFFERS_NUM * sizeof(enc_buffer));
    if (this->e_buff == NULL)
        ERROR_RETURN("Failed to allocate encoder buffer");

    //Create buffer and share it with encoder
    for (unsigned int index = 0; index < V4L2_BUFFERS_NUM; index++)
    {
        int fd;
        NvBufferParams params = {0};

        this->e_buff[index].left_data_update = false;
        this->e_buff[index].right_data_update = false;

        if (-1 == NvBufferCreate(&fd, 1280*2, 720,
                    NvBufferLayout_Pitch,
                    this->GetNvbuffColorFmt(V4L2_PIX_FMT_UYVY)))
            ERROR_RETURN("Failed to create NvBuffer");

        this->e_buff[index].buffer.dmabuff_fd = fd;

        if (-1 == NvBufferGetParams(fd, &params))
            ERROR_RETURN("Failed to get NvBuffer parameters");

        // TODO add multi-planar support
        // Currently it supports only YUV422 interlaced single-planar
        this->e_buff[index].buffer.size = params.height[0] * params.pitch[0];
        this->e_buff[index].buffer.start = (unsigned char *)mmap(
                NULL,
                this->e_buff[index].buffer.size,
                PROT_READ | PROT_WRITE,
                MAP_SHARED,
                this->e_buff[index].buffer.dmabuff_fd, 0);
        printf("fd = %d, height = %d, width = %d, pitch = %d \r\n",
                fd, params.height[0], params.width[0], params.pitch[0]);
    }

    return true;
}

Then I put the data into the NvBuffer and feed it to the encoder directly, not from the callback.

if (fds0[0].revents & POLLIN && fds1[0].revents & POLLIN) {
           count++;
           if (fds0[0].revents & POLLIN) {
               struct v4l2_buffer v4l2_buf;

               // Dequeue camera buff
               memset(&v4l2_buf, 0, sizeof(v4l2_buf));
               v4l2_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
               v4l2_buf.memory = V4L2_MEMORY_DMABUF;
               if (ioctl(ctx0->cam_fd, VIDIOC_DQBUF, &v4l2_buf) < 0)
                   ERROR_RETURN("Failed to dequeue camera buff: %s (%d)",
                           strerror(errno), errno);

               //Copy data to encoder buffer
               for (unsigned int i = 0; i < 720; i++)
               {
                  memcpy((unsigned char *) (this->e_buff[v4l2_buf.index].buffer.start + (i*2560*2)),
                         (unsigned char *) (ctx0->g_buff[v4l2_buf.index].start + i*2560), 2560);
                  this->e_buff[v4l2_buf.index].left_data_update = true;
               }

               if (this->e_buff[v4l2_buf.index].left_data_update &&
                   this->e_buff[v4l2_buf.index].right_data_update)
               {
                  struct v4l2_buffer enc_buf;
                  struct v4l2_plane planes[MAX_PLANES];

                  memset(&enc_buf, 0, sizeof(v4l2_buf));
                  memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

                  enc_buf.m.planes = planes;

                  enc_buf.m.planes[0].m.fd = this->e_buff[v4l2_buf.index].buffer.dmabuff_fd;
                  enc_buf.m.planes[0].bytesused = 1;
                  printf ("Enqueue 0 index%d fd%d \r\n", v4l2_buf.index, enc_buf.m.planes[0].m.fd);
                  if (ctx0->enc->output_plane.qBuffer(enc_buf, NULL) < 0)
                  {
                     abort(ctx0);
                     ERROR_RETURN("Failed to queue buffer on ENC output plane");
                  }
                  this->e_buff[v4l2_buf.index].left_data_update = false;
               }
               // Enqueue camera buff
               // It might be more reasonable to wait for the completion of
               // VIC processing before enqueueing the current buff. But VIC
               // processing time is far less than the camera frame interval,
               // so we probably don't need such synchronization.
               if (ioctl(ctx0->cam_fd, VIDIOC_QBUF, &v4l2_buf))
                   ERROR_RETURN("Failed to queue camera buffers: %s (%d)",
                           strerror(errno), errno);
           }

And I got this error:

VENC: TryProcessingData: 2178:  VideoEncFeedImage failed. Input buffer 0 sent
VENC: NvMMLiteVideoEncDoWork: 2572: BlockSide error 0x2
NvVideoEnc: BlockError 
NvVideoEncTransferOutputBufferToBlock: DoWork failed line# 641 
NvVideoEnc: NvVideoEncTransferOutputBufferToBlock TransferBufferToBlock failed Line=652
Segmentation fault

I'm curious: can we feed an NvBuffer directly to the encoder, not via the callback?

Thanks

Hi Vu,
The encoder input has to be NvBufferColorFormat_YUV420, so you need the conversion.
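
So the combined 2560x720 buffer should be created with the encoder-compatible format; a minimal sketch, following the NvBufferCreate() call in your PrepareEncoderBuffers():

        int fd;
        // Encoder input: pitch-linear YUV420 instead of UYVY.
        if (-1 == NvBufferCreate(&fd, 1280*2, 720,
                    NvBufferLayout_Pitch,
                    NvBufferColorFormat_YUV420))
            ERROR_RETURN("Failed to create NvBuffer");

The UYVY camera frames then have to be converted into this buffer (via VIC or CUDA) before its fd is queued on the encoder output plane.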

Hi DaneLLL,

Thanks for the information.
When I try to set the encoder input to NvBufferColorFormat_UYVY, I get an error. So the encoder input can only be NvBufferColorFormat_YUV420?
Now I can feed NvBufferColorFormat_YUV420 to the encoder. However, combining the data of the two converted frames is still not easy for me.

Best Regards,