What's the difference between the NVIDIA video converter's Tegra Multimedia low-level API pixel format V4L2_PIX_FMT_YUV420M and the standard X264_CSP_I420?

I saved picture data in the V4L2_PIX_FMT_YUV420M pixel format to a file, but it can't be played with yuvplayer; the player's screen is entirely green. What's the difference between V4L2_PIX_FMT_YUV420M and libx264's standard X264_CSP_I420?

Hi Allen, please share the code you use with the low-level API.

The following are the init API and capture functions.
We found that the V4L2_PIX_FMT_YUV420M picture data length is longer than X264_CSP_I420's. Searching on the internet, I found this is because V4L2_PIX_FMT_YUV420M is a multi-plane pixel format while X264_CSP_I420 is a single-plane (contiguous) format. How can I convert V4L2_PIX_FMT_YUV420M to X264_CSP_I420?
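For background: YUV420M keeps Y, U and V in three separately allocated planes, each row possibly padded out to a hardware pitch, whereas X264_CSP_I420 expects one contiguous Y+U+V buffer with no row padding. A minimal sketch of packing three planes into one I420 buffer (the `Plane` struct and its fields are illustrative, not the NvBuffer API):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative description of one plane of a multi-planar frame.
struct Plane {
    const uint8_t *data;
    uint32_t width;   // payload bytes per row
    uint32_t height;  // number of rows
    uint32_t pitch;   // bytes per row including padding (pitch >= width)
};

// Copy one plane into dst, dropping the per-row padding; returns bytes written.
static size_t pack_plane(uint8_t *dst, const Plane &p)
{
    for (uint32_t row = 0; row < p.height; ++row)
        std::memcpy(dst + row * p.width, p.data + row * p.pitch, p.width);
    return (size_t)p.width * p.height;
}

// Pack planes[0..2] (Y, U, V) into a single contiguous I420-style buffer.
std::vector<uint8_t> yuv420m_to_i420(const Plane planes[3])
{
    size_t total = 0;
    for (int i = 0; i < 3; ++i)
        total += (size_t)planes[i].width * planes[i].height;
    std::vector<uint8_t> out(total);
    size_t off = 0;
    for (int i = 0; i < 3; ++i)
        off += pack_plane(out.data() + off, planes[i]);
    return out;
}
```

The resulting buffer can be handed to x264 as a single I420 frame (or written to a file that yuvplayer understands).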

bool ConsumerThread::createImageConverter()            
{
    int ret = 0;
    char cname[10];                                    

    sprintf(cname, "conv%d", cam_idx);                 
    // BLOCKLINEAR YUV420M --> PITCH YUV420M converter
    m_ImageConverter = NvVideoConverter::createVideoConverter(cname);
    if (!m_ImageConverter)                             
        ORIGINATE_ERROR("Could not create m_ImageConverter");

    if (DO_STAT)
        m_ImageConverter->enableProfiling();


    m_ImageConverter->capture_plane.
        setDQThreadCallback(converterCapturePlaneDqCallback);   
    m_ImageConverter->output_plane.
        setDQThreadCallback(converterOutputPlaneDqCallback);    


    ret = m_ImageConverter->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, m_pContext->width,
                                    m_pContext->height, V4L2_NV_BUFFER_LAYOUT_BLOCKLINEAR); 
    if (ret < 0)
        ORIGINATE_ERROR("Could not set output plane format");   
      //ret = m_ImageConverter->setCapturePlaneFormat(V4L2_PIX_FMT_NV12M, m_pContext->width,
    //ret = m_ImageConverter->setCapturePlaneFormat(V4L2_PIX_FMT_YUV444M, m_pContext->width,
    //ret = m_ImageConverter->setCapturePlaneFormat(V4L2_PIX_FMT_ABGR32, m_pContext->width,
    ret = m_ImageConverter->setCapturePlaneFormat(V4L2_PIX_FMT_YUV420M, m_pContext->width,
    //ret = m_ImageConverter->setCapturePlaneFormat(V4L2_PIX_FMT_UYVY, m_pContext->width,
                                    m_pContext->height, V4L2_NV_BUFFER_LAYOUT_PITCH);       
    if (ret < 0)
        ORIGINATE_ERROR("Could not set capture plane format");  

#if 0
    ret = m_ImageConverter->setCropRect(480, 220, 1020, 828);
    //ret = m_ImageConverter->setCropRect(0, 0, 200, 200);
    if (ret < 0)
        ORIGINATE_ERROR("Could not set crop rect");
#endif

    // Query, Export and Map the output plane buffers so that we can read
    // raw data into the buffers
    ret = m_ImageConverter->output_plane.setupPlane(V4L2_MEMORY_DMABUF, conv_buf_num, false, false);
    if (ret < 0)
        ORIGINATE_ERROR("Could not setup output plane");        

    // Query, Export and Map the capture plane buffers so that we can read
    // the converted data from the buffers
    ret = m_ImageConverter->capture_plane.setupPlane(V4L2_MEMORY_MMAP, conv_buf_num, true, false);
    if (ret < 0)
        ORIGINATE_ERROR("Could not setup capture plane");

    // Add all empty conv output plane buffers to m_ConvOutputPlaneBufQueue
    for (uint32_t i = 0; i < m_ImageConverter->output_plane.getNumBuffers(); i++)
    {
        m_ConvOutputPlaneBufQueue->push(
            m_ImageConverter->output_plane.getNthBuffer(i));
    }

    // conv output plane STREAMON
    ret = m_ImageConverter->output_plane.setStreamStatus(true);
    if (ret < 0)
        ORIGINATE_ERROR("fail to set conv output stream on");

    // conv capture plane STREAMON
    ret = m_ImageConverter->capture_plane.setStreamStatus(true);
    if (ret < 0)
        ORIGINATE_ERROR("fail to set conv capture stream on");

    // Start threads to dequeue buffers on conv capture plane,
    // conv output plane and capture plane
    m_ImageConverter->capture_plane.startDQThread(this);
    m_ImageConverter->output_plane.startDQThread(this);

    // Enqueue all empty conv capture plane buffers
    for (uint32_t i = 0; i < m_ImageConverter->capture_plane.getNumBuffers(); i++)
    {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;

        ret = m_ImageConverter->capture_plane.qBuffer(v4l2_buf, NULL);
        if (ret < 0) {
            abort();
            ORIGINATE_ERROR("Error queueing buffer at conv capture plane");
        }
        printf(" i: %d\n", i);
    }

    printf("create video converter returned true\n");
    return true;
}
bool ConsumerThread::converterCapturePlaneDqCallback(  
    struct v4l2_buffer *v4l2_buf,
    NvBuffer * buffer,
    NvBuffer * shared_buffer,
    void *arg)
{
    ConsumerThread *thiz = (ConsumerThread*)arg;       
    camera_caffe_context *p_ctx = thiz->m_pContext;    
    int e;

    if (!v4l2_buf)                                     
    {                                                  
        REPORT_ERROR("Failed to dequeue buffer from conv capture plane");
        thiz->abort();
        return false;
    }

    if (v4l2_buf->m.planes[0].bytesused == 0)
    {
        return false;
    }





#ifdef RENDER
    m_renderer->render(buffer->planes[0].fd);
#endif
    static FILE* fp = NULL;
    if(!fp)
       fp =  fopen("yuv420p.yuv","wb+");
    printf("------------data size(%d)\n",buffer->planes[0].bytesused);
    fwrite((char *)buffer->planes[0].data, 1,buffer->planes[0].bytesused,fp);
    //fwrite((char *)thiz->showImg->imageData, 1,buffer->planes[0].bytesused,fp);
    e = thiz->m_ImageConverter->capture_plane.qBuffer(*v4l2_buf, NULL);
    if (e < 0)
        ORIGINATE_ERROR("qBuffer failed");

    return true;
}
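Note that the callback above fwrites only `buffer->planes[0]`, which for V4L2_PIX_FMT_YUV420M holds just the Y plane; the missing chroma is consistent with the green playback. To dump a file yuvplayer can read as I420, write all planes, row by row, skipping the per-row pitch padding. A hedged sketch (the NvBuffer field names in the comment follow the Tegra Multimedia API headers; verify them against your version):

```cpp
#include <cstdint>
#include <cstdio>

// Write `height` rows of `width` payload bytes each, skipping the
// pitch padding at the end of every row.
static void write_plane(FILE *fp, const uint8_t *data,
                        uint32_t width, uint32_t height, uint32_t pitch)
{
    for (uint32_t row = 0; row < height; ++row)
        fwrite(data + (size_t)row * pitch, 1, width, fp);
}

// In the capture-plane callback, instead of dumping only plane 0:
//
//   for (uint32_t i = 0; i < buffer->n_planes; i++) {
//       NvBuffer::NvBufferPlane &p = buffer->planes[i];
//       write_plane(fp, p.data, p.fmt.bytesperpixel * p.fmt.width,
//                   p.fmt.height, p.fmt.stride);
//   }
```

Written this way, a 1920x1080 frame produces exactly 1920 x 1080 x 1.5 = 3110400 bytes per frame, matching what X264_CSP_I420 expects.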

Hi Allen, we don’t know much about X264_CSP_I420. For 1920x1080 in V4L2_PIX_FMT_YUV420M, the size should be 1920 x 1080 x 1.5. Is that the size you observed?

Not sure, but are you dumping YUVs from the camera source? Which sample do you use?

We use the NV ISP with an OV4689 raw sensor. I set the pixel format to V4L2_PIX_FMT_YUV420M and then printed the raw V4L2_PIX_FMT_YUV420M data length, which is as follows:

printf("------------data size(%d)\n",buffer->planes[0].bytesused);
------------data size(2228224)

which is not 1920 x 1080 x 1.5 = 3110400.
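A guess from the numbers: 2228224 = 2048 x 1088, which would mean `bytesused` covers only the Y plane, with the row pitch aligned up to 256 bytes (1920 -> 2048) and the plane height to 32 rows (1080 -> 1088). The alignment values below are an assumption inferred from the observed size, not documented behaviour:

```cpp
#include <cstdint>

// Round x up to the next multiple of a (a must be a power of two).
static inline uint32_t align_up(uint32_t x, uint32_t a)
{
    return (x + a - 1) & ~(a - 1);
}

// Assumed reconstruction of the observed plane-0 bytesused:
// Y plane only, pitch aligned to 256 bytes, height aligned to 32 rows.
inline uint32_t guessed_y_plane_bytes(uint32_t w, uint32_t h)
{
    return align_up(w, 256) * align_up(h, 32);  // 1920x1080 -> 2048*1088
}

// Tight, unpadded I420 size for comparison (what x264 expects).
inline uint32_t i420_bytes(uint32_t w, uint32_t h)
{
    return w * h * 3 / 2;                       // 1920x1080 -> 3110400
}
```

If this reading is right, the dump is both pitch-padded and missing the U/V planes, so its size cannot match the tight I420 figure.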

Hi Allen, if it is sensor-independent, it should also reproduce with the onboard ov5693. Could you try with ov5693 and share the result? Which sample code do you use? 10_camera_recording?

Hi, DaneLLL

I think it should be the same for OV5693 and OV4689: both feed data to the NV ISP input, which then outputs the same YUV420M format. The video converter and the NV ISP don’t know which sensor is connected; the only thing the ISP cares about is the output data format from the sensors, which is already the same (raw RG10, 10-bit). The differences are mainly in the sensor configuration; to the ISP they should look identical.

Please refer to the sample based on ~/tegra_multimedia_api/samples/09_camera_jpeg_capture and run
./camera_jpeg_capture --pre-res 1920x1080
main.cpp (22.3 KB)