I’m trying to develop an efficient JPEG decoder based on the Jetson Multimedia API. The decoded image needs to end up in RGB color space. I’m using an AGX Xavier. My current approach consists of the following steps:
- decode the JPEG using either decodeToFd() or decodeToBuffer()
- wrap the decoded data in a VPIImage in order to convert it from YUV to RGB
Looking into the example provided in jetson_multimedia_api/samples/06_jpeg_decode, it looks to me like:
decodeToFd() is fast (~4 ms for Full HD), but getting the decoded data to NvVideoConverter takes ~25 ms, which is way too long.
decodeToBuffer() is slower (~13 ms for Full HD) and it ends up writing results to user-space memory (V4L2_MEMORY_USERPTR memory type), which cannot be wrapped into a VPIImage.
I’m looking for any ideas on how to best approach this. In particular:
- how to get the output of decodeToFd() more efficiently
- how to convert it from YUV to RGB in an efficient way
Any help appreciated.
A possible solution would be:
- Call decodeToFd() to get an NvBuffer in YUV420
- Call NvBufferTransform() to get an NvBuffer in RGBA
- Create a CUDA buffer in BGR and implement CUDA code to convert the RGBA NvBuffer into the BGR CUDA buffer
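The first two steps above can be sketched as below. This is only a sketch assuming the JetPack 4.x nvbuf_utils interface (NvBufferCreateEx / NvBufferTransform) and the NvJPEGDecoder class from the multimedia API samples; it compiles only on a Jetson with those headers, error handling is minimal, and field names should be checked against nvbuf_utils.h on the device:

```cpp
#include "NvJpegDecoder.h"  // NvJPEGDecoder (multimedia API samples)
#include "nvbuf_utils.h"    // NvBufferCreateEx, NvBufferTransform

// Decode a JPEG into a DMA buffer (YUV420) and VIC-convert it to RGBA.
// Returns 0 on success; *out_rgba_fd receives the RGBA NvBuffer FD.
int decode_to_rgba_fd(NvJPEGDecoder *dec,
                      unsigned char *jpeg_buf, unsigned long jpeg_size,
                      int *out_rgba_fd) {
    int yuv_fd = -1;
    uint32_t pixfmt = 0, width = 0, height = 0;

    // 1. Hardware JPEG decode straight into a DMA buffer (no copy to user space).
    if (dec->decodeToFd(yuv_fd, jpeg_buf, jpeg_size, pixfmt, width, height) < 0)
        return -1;

    // 2. Allocate a destination NvBuffer in RGBA.
    NvBufferCreateParams cparams = {0};
    cparams.width       = width;
    cparams.height      = height;
    cparams.payloadType = NvBufferPayload_SurfArray;
    cparams.colorFormat = NvBufferColorFormat_ABGR32;  // byte order R,G,B,A
    cparams.layout      = NvBufferLayout_Pitch;
    if (NvBufferCreateEx(out_rgba_fd, &cparams) < 0)
        return -1;

    // 3. VIC-accelerated YUV420 -> RGBA conversion between the two FDs.
    NvBufferTransformParams tparams = {0};
    tparams.transform_flag   = NVBUFFER_TRANSFORM_FILTER;
    tparams.transform_filter = NvBufferTransform_Filter_Smart;
    return NvBufferTransform(yuv_fd, *out_rgba_fd, &tparams);
}
```

The RGBA FD can then be CUDA-mapped (e.g. via EGLImage) for the final RGBA→BGR kernel.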
For maximum throughput of NvBufferTransform(), please execute the steps in:
Nvvideoconvert issue, nvvideoconvert in DS4 is better than Ds5? - #3 by DaneLLL
@DaneLLL Thanks for posting. The question now is how to get the data out of decodeToFd(). By calling decodeToFd() I’m getting an FD and I don’t know how to “wrap” it into an NvBuffer. One way is to use NvVideoConverter as in the 06_jpeg_decode sample, but as pointed out this takes a lot of time (~25 ms). Is there any other way to get the data into an NvBuffer?
Please use NvBufferTransform() instead of NvVideoConverter. In 12_camera_v4l2_cuda, there is code for MJPEG decoding which is similar to this use case. Please take a look.
The workflow you proposed works very well and is much faster than with NvVideoConverter. Note that the RGBA → RGB conversion can also be done by VPI with VPI_BACKEND_CUDA, and it works just fine too.