JPEG Encode Grayscale Image

Hi,

I’ve implemented a hand-crafted JpegEncoder class, using the NvJpegEncoder class installed with tegra_multimedia_api as a reference. As stated in the documentation and in the comments in the sample code, I’m defining the TEGRA_ACCELERATE macro to use the GPU, but the results are no different (no better) than when I don’t define it. I used the tegrastats utility to check whether the GPU is being used, and I saw that GPU usage is 0. CPU usage, on the other hand, is about 30%.

Furthermore, I checked the jpeglib.h header file to see where this TEGRA_ACCELERATE macro is used, and found a function named jpeg_set_hardware_acceleration_parameters_enc. Even when I don’t define TEGRA_ACCELERATE, I can still call this function with no compilation error.

Here is the code I implemented:

std::memset(&cinfo, 0, sizeof(cinfo));
std::memset(&jerr, 0, sizeof(jerr));
cinfo.err = jpeg_std_error(&jerr);

jpeg_create_compress(&cinfo);
cinfo.image_width = imageWidth;
cinfo.image_height = imageHeight;
cinfo.input_components = 1;
cinfo.in_color_space = JCS_GRAYSCALE;

jpeg_suppress_tables(&cinfo, TRUE);

unsigned char *outputBuffer = nullptr;
unsigned long outputBufferSize = 0;

jpeg_mem_dest(&cinfo, &outputBuffer, &outputBufferSize);

jpeg_set_defaults(&cinfo);
jpeg_set_quality(&cinfo, quality, TRUE);
jpeg_set_hardware_acceleration_parameters_enc(&cinfo, TRUE, outputBufferSize, imageWidth, imageHeight);
jpeg_start_compress(&cinfo, TRUE);
if (cinfo.err->msg_code)
{
  char err_string[256];
  cinfo.err->format_message((j_common_ptr) &cinfo, err_string);
  throw std::runtime_error(err_string);
}

JSAMPROW row_pointer[1];
// Encode
while (cinfo.next_scanline < cinfo.image_height) {
  row_pointer[0] = (unsigned char*)&imgData[cinfo.next_scanline * imageWidth];
  jpeg_write_scanlines(&cinfo, row_pointer, 1);
}

jpeg_finish_compress(&cinfo);

std::string out;
out.resize(outputBufferSize);

memcpy(&out[0], outputBuffer, outputBufferSize);
free(outputBuffer);

return out;

Any help would be appreciated.

Murat


I’m wondering about the answer too.


We also faced the same problem. Any solutions?

So, I’ve tried the NvJpegEncoder class as is, but I get a “Tegra Acceleration failed” error message from jpeg_finish_compress at line 270. It appears that the only supported color space is JCS_YCbCr.

Is there any way to encode a grayscale image that harnesses the power of the HW?

Hi,
Gray format is not supported by the hardware NVJPG engine. You may need to use a software encoder, or allocate a YUV420 buffer with null U/V planes.

For filling the U/V planes, please refer to nvbuff_do_clearchroma() in

/usr/src/jetson_multimedia_api/samples/12_camera_v4l2_cuda/
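In case it helps to see what the “null U/V planes” workaround amounts to: a grayscale frame can be packed into an I420 (planar YUV420) buffer by copying the gray values into the Y plane and filling both chroma planes with the neutral value 0x80. This is a minimal CPU-side sketch in plain C++ (no Jetson APIs; `grayToI420` is a name I made up); nvbuff_do_clearchroma() achieves the same effect directly on the hardware buffer’s chroma planes.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Pack a grayscale image into an I420 (planar YUV420) buffer:
// Y plane = gray data, U and V planes = 0x80 (neutral chroma).
// Assumes even width/height and a tightly packed gray input.
std::vector<uint8_t> grayToI420(const std::vector<uint8_t>& gray,
                                int width, int height)
{
    const size_t ySize = static_cast<size_t>(width) * height;
    const size_t cSize = ySize / 4;  // each chroma plane is (w/2) x (h/2)
    std::vector<uint8_t> yuv(ySize + 2 * cSize);

    std::memcpy(yuv.data(), gray.data(), ySize);       // Y plane
    std::memset(yuv.data() + ySize, 0x80, 2 * cSize);  // U and V planes
    return yuv;
}
```

Encoding such a buffer as JCS_YCbCr yields a visually grayscale JPEG, at the cost of carrying (empty) chroma data through the encoder.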

Thanks for the reply. I tried to use the sample code you referred to, but failed to use it properly. I’m still getting the same error message. Is there any detailed documentation or flow diagram of the API that describes how to do a proper JPEG encode?

Hi,
Please refer to

/usr/src/tegra_multimedia_api/samples/05_jpeg_encode/

to use NvJpegEncoder. The default format is YUV420. You can put the gray data in the Y plane and fill the U/V planes with nvbuff_do_clearchroma().

After installation through SDKManager, you should see the samples.

Thanks for the reply. That’s exactly what I’m doing, but I’m getting the same error. Here is my updated code:

  unsigned long out_buf_size = 0;
  unsigned char *out_buf = nullptr;

  memcpy(virtualAddress, &imgData[0], imgData.size());
  if(NvBufferMemSyncForCpu(dmaBufferFd, 0, (void**)&virtualAddress) == -1)
  {
    throw std::runtime_error("NvBufferMemSyncForCpu failed");
  }

  if(Raw2NvBuffer(virtualAddress, 0, imageWidth, imageHeight, dmaBufferFd) == -1)
  {
    throw std::runtime_error("Raw2NvBuffer failed");
  }
  
  if(!nvbuff_do_clearchroma(dmaBufferFd))
  {
    throw std::runtime_error("nvbuff_do_clearchroma failed");
  }

  encoder->encodeFromFd(dmaBufferFd, JCS_YCbCr, &out_buf, out_buf_size, quality);

  std::string out(out_buf_size, ' ');
  memcpy(&out[0], out_buf, out_buf_size);

I’m using std::string as a container for my image data. I know it’s not best practice, but bear with me :) I’m getting the “Tegra Acceleration failed” error from the encodeFromFd function, at line 115 in the NvJpegEncoder.cpp file.

Thanks,

Hi,
Please remove NvBufferMemSyncForCpu() and try again.
The virtualAddress passed to NvBufferMemSyncForCpu() should be the pointer returned by NvBufferMemMap(). The way you call it does not look right.

Hi,

Actually, I misled you due to a missing piece of code. Here is the complete code:

L4tJpegEncoder::L4tJpegEncoder(const uint16_t imageWidth, const uint16_t imageHeight, const uint8_t quality)
  : imageWidth(imageWidth)
  , imageHeight(imageHeight)
  , quality(quality)
{
  encoder = NvJPEGEncoder::createJPEGEncoder("L4tJpegEncoder");
  NvBufferCreateParams input_params;
  memset(&input_params, 0, sizeof(input_params));
  input_params.payloadType = NvBufferPayload_SurfArray;
  input_params.width = imageWidth;
  input_params.height = imageHeight;
  input_params.layout = NvBufferLayout_Pitch;
  input_params.nvbuf_tag = NvBufferTag_JPEG;
  input_params.colorFormat = NvBufferColorFormat_YUV420;

  if( NvBufferCreateEx(&dmaBufferFd, &input_params) == -1)
  {
    throw std::runtime_error("HW Buffer creation failed");
  }

  if(NvBufferMemMap(dmaBufferFd, 0, NvBufferMem_Read_Write, (void**)&virtualAddress) == -1)
  {
    throw std::runtime_error("HW Buffer mapping failed");
  }
}

std::string L4tJpegEncoder::execute(std::string imgData)
{
  unsigned long out_buf_size = 0;
  unsigned char *out_buf = nullptr;

  if(NvBufferMemSyncForCpu(dmaBufferFd, 0, (void**)&virtualAddress))
  {
    throw std::runtime_error("NvBufferMemSyncForCpu failed");
  }

  memcpy(virtualAddress, &imgData[0], imgData.size());

  if(!nvbuff_do_clearchroma(dmaBufferFd))
  {
    throw std::runtime_error("nvbuff_do_clearchroma failed");
  }

  if(Raw2NvBuffer(virtualAddress, 0, imageWidth, imageHeight, dmaBufferFd) == -1)
  {
    throw std::runtime_error("Raw2NvBuffer failed");
  }

  if(NvBufferMemSyncForDevice(dmaBufferFd, 0, (void**)&virtualAddress) == -1)
  {
    throw std::runtime_error("NvBufferMemSyncForDevice failed");
  }

  encoder->encodeFromFd(dmaBufferFd, JCS_YCbCr, &out_buf, out_buf_size, quality);

  std::string out(out_buf_size, ' ');
  memcpy(&out[0], out_buf, out_buf_size);
  free(out_buf);
  return out;
}

Hi,
Please try the default sample first. See if you can run it successfully.

nvidia@Xavier8G:/usr/src/jetson_multimedia_api/samples/05_jpeg_encode$ gst-launch-1.0 videotestsrc num-buffers=1 ! filesink location= ~/a.yuv
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.000224168
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
nvidia@Xavier8G:/usr/src/jetson_multimedia_api/samples/05_jpeg_encode$ ./jpeg_encode /home/nvidia/a.yuv 320 240 /home/nvidia/a.jpg
libv4l2_nvvidconv (0):(802) (INFO) : Allocating (1) OUTPUT PLANE BUFFERS Layout=0
libv4l2_nvvidconv (0):(818) (INFO) : Allocating (1) CAPTURE PLANE BUFFERS Layout=1
App run was successful