We currently have a working pipeline that feeds an MJPEG camera into the H.265 encoder: NvJPEGDecoder decodeToBuffer decodes each frame, and we then memcpy the planes into the output_plane of the NvVideoEncoder, downsampling the chroma from 4:2:2 to 4:2:0 along the way.
At 4K resolution we found that decodeToBuffer performance is too low, so we want to switch to decodeToFd instead. I followed the 06_jpeg_decode sample for queueing the fd into the output_plane of the NvVideoConverter. I couldn't find an example of feeding the output of an NvVideoConverter into an NvVideoEncoder, so I adapted the 07_video_convert sample, which chains one NvVideoConverter into another: in the converter's capture_plane dequeue callback thread I queue the dequeued buffer into the output_plane of the NvVideoEncoder.
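For reference, the zero-copy handoff boils down to importing the dmabuf fd with V4L2_MEMORY_DMABUF, which is what the MMAPI plane classes do internally when you queue an fd. The sketch below only shows how the v4l2_buffer is populated for a simplified single-plane case; the helper name fill_dmabuf_qbuf is mine, not part of the Jetson API.

```cpp
#include <cstring>
#include <linux/videodev2.h>

// Fill a multi-planar V4L2 buffer that imports an existing dmabuf fd
// (e.g. the fd returned by NvJPEGDecoder::decodeToFd). Illustrative
// helper, not a Jetson API function; simplified to one plane.
static void fill_dmabuf_qbuf(struct v4l2_buffer &buf,
                             struct v4l2_plane planes[],
                             unsigned index, int dmabuf_fd,
                             unsigned bytesused)
{
    memset(&buf, 0, sizeof(buf));
    memset(&planes[0], 0, sizeof(struct v4l2_plane));
    buf.index    = index;
    buf.type     = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
    buf.memory   = V4L2_MEMORY_DMABUF;   // import the fd, no CPU copy
    buf.m.planes = planes;
    buf.length   = 1;                    // single-plane simplification
    planes[0].m.fd      = dmabuf_fd;
    planes[0].bytesused = bytesused;
}
// The caller would then issue ioctl(fd, VIDIOC_QBUF, &buf); on the
// MMAPI classes this is hidden behind the plane's qBuffer call.
```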
Does this sound like the correct approach?
I've run into a problem where the encoder seems to run out of output_plane buffers once the frame count exceeds getNumBuffers(); it looks like they are never released back to the converter.
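To illustrate what I think is happening: the shared buffers form a fixed pool of getNumBuffers() entries, and unless each buffer dequeued from the encoder's output_plane is re-queued back to the converter's capture_plane, the pool drains after exactly getNumBuffers() frames. This is an abstract model in plain C++, not Jetson API code; the names are mine.

```cpp
#include <queue>

// Toy model of the buffer recycling between the converter capture_plane
// and the encoder output_plane. Not Jetson API code.
struct BufferPool {
    std::queue<int> free_bufs;  // buffers currently queued on the converter capture_plane

    explicit BufferPool(int n) {
        for (int i = 0; i < n; ++i) free_bufs.push(i);
    }

    // Converter capture_plane dequeue -> encoder output_plane queue.
    // Returns the buffer index, or -1 when the pool is exhausted.
    int send_frame_to_encoder() {
        if (free_bufs.empty()) return -1;  // what we observe at runtime
        int idx = free_bufs.front();
        free_bufs.pop();
        return idx;
    }

    // Encoder output_plane dequeue handler: the frame is consumed, so the
    // buffer must go back to the converter. Omitting this call models the bug.
    void recycle(int idx) { free_bufs.push(idx); }
};
```

With recycle() called for every consumed frame the model runs indefinitely; without it, send_frame_to_encoder() starts returning -1 after n frames, which matches the stall we see past getNumBuffers.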
Has anyone implemented the pipeline of MJPEG V4L2 camera -> NvJPEGDecoder -> decodeToFd -> NvVideoConverter -> NvVideoEncoder before?
We're using L4T 28.1 with the default power mode, and we are not running jetson_clocks.sh.