I need to capture MJPEG from a UVC webcam, process the data, and then encode it into an H.265 video stream. I am a little confused about the buffer sharing between the output of the NvJPEGDecoder and the input of the NvVideoEncoder.
I am using the C++ V4L2 and NVIDIA multimedia API modules, starting from the sample 12_camera_v4l2_cuda as a base. I cannot use GStreamer or Argus, as I have too many odd things to do in the processing stage, so please do not refer me to GStreamer.
So basically I would need this flow: UVC -> DMABUF -> NvJPEGDecoder -> MMAP -> NvVideoEncoder -> MMAP.
But I understand that the NvJPEGDecoder is an MMAP exporter, and so is the NvVideoEncoder, so they cannot share MMAP buffers. Is this true?
Can anyone guide me on the most efficient way, or any way, to get buffers from the JPEG decoder into the NvVideoEncoder? Do I actually have to copy the MMAP buffers output by the NvJPEGDecoder into a DMABUF for the NvVideoEncoder's input, or is there a way to share the buffer?