NvJPEGEncoder - Converting from OpenCV Mat

Hey all,

I’m trying to encode a JPEG from an OpenCV Mat using NvJPEGEncoder. I think I have the basics working, but the resulting JPEG is mangled in a particular way. I’m guessing I’m using the wrong YUV format.

I’ve attached the resulting JPEG, as well as one encoded correctly (from the same frame). The difference will probably give us a clue, but I can’t figure it out…

Below is my (test) code.

Thank you!

void encodeJpeg(Mat rgb) {
    // Convert an RGB Mat to a YUV Mat
    // I've also tried cv::COLOR_RGB2YUV_I420, but then the raw-data length is incorrect
    cv::Mat yuv;
    cv::cvtColor(rgb, yuv, cv::COLOR_RGB2YUV);

    // Convert YUV Mat to NvBuffer
    NvBuffer nvbuf(V4L2_PIX_FMT_YUV420M, yuv.cols, yuv.rows, 0);
    nvbuf.allocateMemory();
    auto ret = read_video_frame((const char*)yuv.data, yuv.total()*yuv.elemSize(), nvbuf);
    if(ret < 0) log_error("read_video_frame error");

    // Encode JPEG
    unique_ptr<NvJPEGEncoder> jpegenc( NvJPEGEncoder::createJPEGEncoder("jpenenc") );
    if(!jpegenc) log_error("jpegenc error");
    unsigned long out_buf_size = yuv.cols * yuv.rows * 3 / 2;
    unsigned char *out_buf = new unsigned char[out_buf_size];
    ret = jpegenc->encodeFromBuffer(nvbuf, JCS_YCbCr, &out_buf, out_buf_size, 40);
    if(ret < 0) log_error("encodeFromBuffer error");

    // Save to disk
    std::ofstream out_file("test.jpg", std::ios::binary);
    out_file.write((char *) out_buf, out_buf_size);
    delete[] out_buf;
}

// read_video_frame - Inspired by the version provided in NvUtils
int read_video_frame(const char* inpBuf, unsigned inpBufLen, NvBuffer & buffer)
{
    log_trace("read_video_frame: "+to_string(inpBufLen)+" vs "+to_string(buffer.planes[0].fmt.bytesperpixel * buffer.planes[0].fmt.width * buffer.planes[0].fmt.height * buffer.n_planes));

    uint32_t i, j;
    char *data;

    // Copy the packed source frame into the NvBuffer plane by plane, row by
    // row; each plane's rows may be padded out to fmt.stride.
    for (i = 0; i < buffer.n_planes; i++)
    {
        NvBuffer::NvBufferPlane &plane = buffer.planes[i];
        std::streamsize bytes_to_read =
            plane.fmt.bytesperpixel * plane.fmt.width;
        data = (char *) plane.data;
        plane.bytesused = 0;
        for (j = 0; j < plane.fmt.height; j++)
        {
            unsigned numRead = min((unsigned)bytes_to_read, (unsigned)inpBufLen);
            
            memcpy(data, inpBuf, numRead);

            if (numRead < bytes_to_read) {
                log_error(to_string(bytes_to_read)+" vs "+to_string(inpBufLen));
                return -1;
            }

            inpBuf    += numRead;
            inpBufLen -= numRead;

            data += plane.fmt.stride;
        }
        plane.bytesused = plane.fmt.stride * plane.fmt.height;
    }
    return 0;
}


PS: Not sure if it’s relevant, but I notice that I’m getting the following on stdout when I try to encode:

nvbuf_utils: dmabuf_fd -1 mapped entry NOT found

Actually got it working just after posting this. The solution:

  • Use cv::COLOR_RGB2YUV_I420 instead of cv::COLOR_RGB2YUV.
  • Don't size the NvBuffer from yuv.rows and yuv.cols as above; use the width and height of the original image instead. After the I420 conversion, the yuv Mat's rows/cols no longer correspond to the image dimensions (duh). A sketch of the fixed conversion is below.
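
In case it helps anyone else, here is a minimal sketch of what the fixed conversion ends up looking like. It reuses the same NvBuffer / read_video_frame setup as my original snippet, with rgb as the source Mat:

// Minimal sketch of the fix -- assumes the same helpers as the snippet above.
cv::Mat yuv;
cv::cvtColor(rgb, yuv, cv::COLOR_RGB2YUV_I420);

// The I420 Mat is a single-channel buffer of (rgb.rows * 3 / 2) rows by
// rgb.cols columns, so size the NvBuffer from the original image, not from yuv.
NvBuffer nvbuf(V4L2_PIX_FMT_YUV420M, rgb.cols, rgb.rows, 0);
nvbuf.allocateMemory();

// yuv.total() * yuv.elemSize() == rgb.cols * rgb.rows * 3 / 2 for I420 data.
auto ret = read_video_frame((const char*)yuv.data, yuv.total() * yuv.elemSize(), nvbuf);
if (ret < 0) log_error("read_video_frame error");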

Still curious about that dmabuf_fd log (which I’m still getting).

Hi logidelic,
I cannot find the function below.

ret = read_video_frame((const char*)yuv.data, yuv.total()*yuv.elemSize(), nvbuf);

I can only find this function:

int read_video_frame(std::ifstream * stream, NvBuffer &buffer )

I want to know how to copy the data into the NvBuffer.

thank you!

That read_video_frame isn't part of NvUtils; I wrote it myself. It's the function included in my original post above (inspired by the version provided in NvUtils).

Hi logidelic,
Thank you for your code. I will try this function.

I am trying to understand why out_buf_size and out_buf are allocated before being passed into encodeFromBuffer. How is the buffer size determined?
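
For reference, the sizing in the snippet above follows the pattern used by the NvUtils jpeg_encode sample: the caller pre-allocates the output buffer, and the raw YUV420 frame size (width * height * 3 / 2 bytes) is used as a generous upper bound, since the encoded JPEG is normally far smaller than the raw frame. My understanding (from how the sample uses it, not from official docs) is that out_buf_size is passed by reference and holds the encoded length after the call, which is why that value is what gets written to disk. A minimal sketch, reusing the names from the snippet above:

// Sketch only -- mirrors how the original snippet (and the NvUtils sample)
// size the output buffer; the details are my reading of the sample, not docs.
unsigned long out_buf_size = width * height * 3 / 2;        // raw YUV420 size as an upper bound
unsigned char *out_buf = new unsigned char[out_buf_size];

// out_buf_size is in/out: buffer capacity on entry, encoded JPEG length on return.
int ret = jpegenc->encodeFromBuffer(nvbuf, JCS_YCbCr, &out_buf, out_buf_size, 40);
if (ret == 0)
    out_file.write((char *) out_buf, out_buf_size);

delete[] out_buf;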