Regarding NvJPEG encode/decode using FD

I have been able to compile and try the NvJPEG encode and decode examples provided. However, these examples assume I already have a JPEG image for decoding (the output is saved as a YUV file), and for encoding they take a YUV file as input and save the result as a JPEG file. I would like to encode an RGB image and save it as a JPEG, and similarly decode a JPEG image back to an RGB image.

My use case is as follows: I have a ROS1 bag file with images stored in the compressed image message format. Currently, I am able to extract these images from the bag file and save them in JPEG format using OpenCV. These images are high resolution and there are a lot of them. I would like to instead use the NvJPEG library to encode and save these BGR/RGB images in JPEG format. Later, I would like to read these saved JPEG images back using NvJPEG decode and pass the RGB data to models for inference.

I think converting the RGB image to YUV and passing that to NvJPEG for encoding should work, and when decoding I can convert the YUV output back to RGB. But I would like to know if there are better ways to do this?
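To make the conversion step concrete, here is a minimal numpy-only sketch of the RGB-to-planar-I420 conversion described above, using BT.601 full-range coefficients with 2x2 chroma averaging. This is illustrative only; in practice OpenCV can do the same in one call (e.g. `cv2.cvtColor` with an RGB-to-I420 conversion code), and the exact coefficients/range depend on the target colorspace.

```python
import numpy as np

def rgb_to_i420(rgb):
    """Convert an HxWx3 uint8 RGB image to a packed planar I420 buffer
    (Y plane, then U, then V), size w*h*3/2 bytes. BT.601 full range."""
    h, w, _ = rgb.shape
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0
    # 4:2:0 means each chroma plane is subsampled 2x2
    u_sub = u.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    v_sub = v.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    planes = [np.clip(p, 0, 255).astype(np.uint8).ravel()
              for p in (y, u_sub, v_sub)]
    return np.concatenate(planes)

img = np.zeros((4, 4, 3), dtype=np.uint8)  # tiny all-black test image
buf = rgb_to_i420(img)
print(buf.size)  # 4*4*3/2 = 24 bytes
```

For a black image the Y plane is all zeros and both chroma planes sit at the neutral value 128, which is a quick sanity check of the conversion.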

Hi,
The hardware encoder and decoder do not support RGB, so you need to convert to YUV420 for encoding. Your solution looks good if you would like to use the hardware codec. Alternatively, you may use a software encoder and decoder.

Hi @DaneLLL , I investigated this a bit further. My plan was to use VPI for the color conversion. JPEG encoding and decoding, when done via hardware (file descriptor), use NvBuffer. Up until VPI 1.2 there was interoperability between NvBuffer and VPI, but as of VPI 2.3 I see it is no longer supported. Is there some way around this issue?

Hi,
There is a possible solution in the post:
Hardware Accelerated JPEG encode/decode on Jetson Xavier JP 5.1.3 - #4 by DaneLLL

Please take a look and see if it can be applied to your use case. Since the hardware JPEG encoder does not support BGR, color conversion is required.

Hi @DaneLLL

Okay. I want to focus on encoding for now. I have an RGB image which I can convert to YUV420; this is done using OpenCV and Python. My plan is to submit this YUV420 image to the NvJPEG encoder code, which is in C++, via pybind11. My question is: how can we place this YUV420 image into an NvBuffer? Can we use memcpy? I cannot use an OpenCV GPU Mat in this case as shown in the link you shared. Below is sample pseudo-code of how I am thinking of tackling this.

void jpeg_encode_main(py::array_t<uint8_t> img_data, int width, int height /*, other parameters */) {
    // Assuming context setup and initializations are done before this

    auto buf_info = img_data.request(); // buffer info for the numpy array (packed I420: Y, then U, then V)
    NvBuffer buffer(V4L2_PIX_FMT_YUV420M, width, height, 0);
    buffer.allocateMemory();

    // V4L2_PIX_FMT_YUV420M is multi-planar, so copy each plane (Y, U, V)
    // separately, row by row, respecting the stride of the allocated plane.
    const uint8_t *src = static_cast<const uint8_t *>(buf_info.ptr);
    for (uint32_t i = 0; i < buffer.n_planes; ++i) {
        NvBuffer::NvBufferPlane &plane = buffer.planes[i];
        uint32_t row_bytes = plane.fmt.bytesperpixel * plane.fmt.width;
        uint8_t *dst = plane.data;
        for (uint32_t r = 0; r < plane.fmt.height; ++r) {
            std::memcpy(dst, src, row_bytes);
            src += row_bytes;
            dst += plane.fmt.stride;
        }
        plane.bytesused = plane.fmt.stride * plane.fmt.height;
    }

    // Encode the buffer to JPEG
    for (int i = 0; i < iterator_num; ++i) {
        ret = ctx.jpegenc->encodeFromBuffer(buffer, JCS_YCbCr, &out_buf,
                                            out_buf_size, ctx.quality);
    }

    // Post-processing: timing, writing, cleanup
}
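Since V4L2_PIX_FMT_YUV420M is a multi-planar format, the packed I420 buffer coming from Python has to be split into three separate planes before copying into `buffer.planes[0..2]`. Here is a numpy sketch of where each plane starts in the packed buffer (stride/alignment ignored; names are illustrative):

```python
import numpy as np

def split_i420_planes(buf, width, height):
    """Split a packed I420 byte buffer into its Y, U and V planes.
    These correspond to planes[0], planes[1] and planes[2] of an
    NvBuffer allocated with V4L2_PIX_FMT_YUV420M (ignoring stride)."""
    y_size = width * height
    c_size = y_size // 4  # each chroma plane is subsampled 2x2
    y = buf[:y_size].reshape(height, width)
    u = buf[y_size:y_size + c_size].reshape(height // 2, width // 2)
    v = buf[y_size + c_size:].reshape(height // 2, width // 2)
    return y, u, v

buf = np.arange(24, dtype=np.uint8)  # packed I420 for a 4x4 image
y, u, v = split_i420_planes(buf, 4, 4)
print(y.shape, u.shape, v.shape)  # (4, 4) (2, 2) (2, 2)
```

On the real hardware buffer each plane additionally has a stride (`plane.fmt.stride`) that is usually larger than the row width, so the actual memcpy has to be done row by row rather than in one block per plane.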

Hi,
You can call Raw2NvBufSurface() to copy the data to an NvBufSurface, then call encodeFromFd().
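As I understand it, the essential work a helper like Raw2NvBufSurface() does is copy tightly packed raw plane data into pitch-aligned hardware surface planes, row by row, because the surface pitch is typically larger than the row width. A numpy sketch of that copy pattern (the pitch value here is illustrative):

```python
import numpy as np

def copy_packed_to_pitched(src_plane, pitch):
    """Copy a tightly packed 2-D plane into a pitch-aligned buffer,
    one row at a time; each destination row starts at a pitch-aligned
    offset and the trailing bytes are padding."""
    h, w = src_plane.shape
    dst = np.zeros((h, pitch), dtype=src_plane.dtype)
    dst[:, :w] = src_plane
    return dst

plane = np.full((4, 6), 7, dtype=np.uint8)   # 4 rows, 6 valid bytes per row
pitched = copy_packed_to_pitched(plane, 32)  # e.g. pitch aligned to 32 bytes
print(pitched.shape)  # (4, 32)
```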

hi @DaneLLL

I got an NvBufSurface allocated from an OpenCV Mat in Python via pybind11, but I see no way to convert the NvBufSurface to an NvBuffer or to get an FD from the allocated NvBufSurface. I came across your post where you yourself mentioned that "NvBufSurface is not supported in NvJPEGEncoder". I have spent a considerable amount of time on this, so please suggest a solution. The error I am currently facing:

error: cannot convert ‘NvBufSurface*’ to ‘NvBuffer&’
   72 |     ret = jpegenc->encodeFromBuffer(nvBufSurface, JCS_YCbCr, &out_buf, out_buf_size, quality);
      |                                     ^~~~~~~~~~~~
      |                                     |
      |                                     NvBufSurface*

Also, please note that I am not using DeepStream in any way. This is a custom pipeline I am creating, since we are using ROS.

Hi @DaneLLL

Through this post on the forum, I understood how to convert from an OpenCV Mat to NvBuffer. Now I am able to save the encoded image using encodeFromBuffer. Still, I would like to understand how I can use encodeFromFd. Where can I find NvUtils.cpp so I can understand what read_dmabuf is doing?

Hi,
The cpp file is in

/usr/src/jetson_multimedia_api/samples/common/classes/NvUtils.cpp

Please take a look.
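For what read_dmabuf does conceptually (my reading of the sample code, worth verifying against NvUtils.cpp itself): it maps the DMA buffer and, for each plane, writes out only the valid bytes of each row while advancing through the mapped memory by the plane's pitch, i.e. the inverse of the packed-to-pitched copy. A numpy sketch:

```python
import numpy as np

def read_pitched_plane(pitched, width):
    """Extract the packed pixels from a pitch-aligned plane, dropping the
    per-row padding; this mirrors what a dmabuf read helper does when
    writing plane data out to a file."""
    return np.ascontiguousarray(pitched[:, :width])

pitched = np.zeros((4, 32), dtype=np.uint8)
pitched[:, :6] = 9          # 6 valid bytes per row, rest is padding
packed = read_pitched_plane(pitched, 6)
print(packed.shape)  # (4, 6)
```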