I’m trying to develop an efficient JPEG decoder based on the Jetson Multimedia API. The decoded image ultimately needs to be in the RGB color space. I’m using an AGX Xavier. My current approach consists of the following steps:
decode the JPEG into an NvBuffer using either decodeToFd() or decodeToBuffer()
wrap the NvBuffer in a VPIImage in order to convert it from YUV to RGB
Looking at the example provided in jetson_multimedia_api/samples/06_jpeg_decode, it looks to me like:
the decodeToFd() method is fast (~4 ms for Full HD), but getting the decoded data into an NvBuffer through NvVideoConverter takes ~25 ms, which is far too long.
decodeToBuffer() is slower (~13 ms for Full HD) and writes its results to a user-space NvBuffer (V4L2_MEMORY_USERPTR memory type), which cannot be wrapped in a VPIImage.
I’m looking for any ideas on how to best approach the topic. In particular:
how to get an NvBuffer from decodeToFd() more efficiently
how to convert it from YUV to RGB in an efficient way
@DaneLLL Thanks for posting. The question now is how to get an NvBuffer from decodeToFd(). Calling decodeToFd() gives me an FD, and I don’t know how to “wrap” it into an NvBuffer. One way is to use NvVideoConverter as in the 06_jpeg_decode sample, but as pointed out this takes a lot of time (~25 ms). Is there any other way to get the data into an NvBuffer?
Hi,
Please use NvBufferTransform() instead of NvVideoConverter. In 12_camera_v4l2_cuda, there is code for MJPG decoding which is similar to this use-case. Please take a look.
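To make the suggested workflow concrete, here is a minimal sketch of that path: decodeToFd() to get the decoder’s dmabuf FD, then NvBufferTransform() to do the YUV → RGBA conversion on the VIC hardware engine, as the 12_camera_v4l2_cuda sample does for MJPG. This is a hedged illustration, not the sample’s exact code: function and struct names are from nvbuf_utils.h and NvJpegDecoder.h, error handling is abbreviated, and the decode_to_rgba_fd wrapper is a hypothetical helper.

```cpp
// Sketch: JPEG -> hardware dmabuf -> RGBA via NvBufferTransform (VIC engine).
// Assumes the jetson_multimedia_api headers/libraries; runs only on Jetson.
#include "NvJpegDecoder.h"
#include "nvbuf_utils.h"

int decode_to_rgba_fd(unsigned char *jpeg_data, unsigned long jpeg_size,
                      int *out_fd /* caller frees with NvBufferDestroy() */)
{
    NvJPEGDecoder *dec = NvJPEGDecoder::createJPEGDecoder("jpegdec");

    int src_fd = -1;
    uint32_t pixfmt = 0, width = 0, height = 0;
    // decodeToFd() returns a dmabuf FD owned by the decoder; do not destroy it.
    if (dec->decodeToFd(src_fd, jpeg_data, jpeg_size, pixfmt, width, height) != 0)
        return -1;

    // Allocate a pitch-linear RGBA destination NvBuffer.
    NvBufferCreateParams params = {0};
    params.width = width;
    params.height = height;
    params.payloadType = NvBufferPayload_SurfArray;
    params.layout = NvBufferLayout_Pitch;
    params.colorFormat = NvBufferColorFormat_ABGR32; // RGBA byte order on Jetson
    params.nvbuf_tag = NvBufferTag_NONE;
    if (NvBufferCreateEx(out_fd, &params) != 0)
        return -1;

    // Color-space conversion YUV -> RGBA, done in hardware (no CPU copy).
    NvBufferTransformParams trans = {0};
    trans.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    trans.transform_filter = NvBufferTransform_Filter_Smart;
    if (NvBufferTransform(src_fd, *out_fd, &trans) != 0)
        return -1;

    delete dec;
    return 0;
}
```

Since both the source and destination are dmabuf-backed NvBuffers, the transform stays on the VIC engine and avoids the NvVideoConverter queue/dequeue overhead.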
The workflow you proposed works very well and is much faster than with NvVideoConverter. Note that the RGBA → RGB step can also be done by vpiSubmitConvertImageFormat() with VPI_BACKEND_CUDA, and it works just fine too.
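For completeness, the final step above might look roughly like this: wrap the RGBA dmabuf FD in a VPIImage and convert it to packed RGB8 on the CUDA backend. This is a sketch under assumptions: it uses the VPI 1.x zero-copy wrapper vpiImageCreateNvBufferWrapper() (on VPI 2.x the equivalent is vpiImageCreateWrapper() with an NvBuffer buffer descriptor), and error handling is abbreviated.

```cpp
// Sketch: wrap an RGBA NvBuffer FD in a VPIImage, then RGBA -> RGB8 on the GPU.
// Assumes VPI 1.x on Jetson; runs only on Jetson hardware.
#include <vpi/Image.h>
#include <vpi/Stream.h>
#include <vpi/algo/ConvertImageFormat.h>

int rgba_fd_to_rgb(int rgba_fd, int width, int height)
{
    VPIImage src = NULL, dst = NULL;
    VPIStream stream = NULL;

    // Zero-copy wrap of the dmabuf; the format is taken from the buffer itself.
    if (vpiImageCreateNvBufferWrapper(rgba_fd, NULL, 0, &src) != VPI_SUCCESS)
        return -1;

    // Destination image in packed RGB8.
    vpiImageCreate(width, height, VPI_IMAGE_FORMAT_RGB8, 0, &dst);
    vpiStreamCreate(0, &stream);

    // RGBA -> RGB conversion on the CUDA backend.
    vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, src, dst, NULL);
    vpiStreamSync(stream);

    // ... use dst here ...

    vpiStreamDestroy(stream);
    vpiImageDestroy(dst);
    vpiImageDestroy(src);
    return 0;
}
```

Because the wrapper is zero-copy, the only data movement is the conversion itself, which keeps the whole decode-to-RGB path on hardware engines and the GPU.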