NvJPEGDecoder output as input to NvVideoEncoder

Hi all

I need to capture MJPEG from a UVC webcam, process the data, and then encode it into an H.265 video stream. I am a little confused about the buffer sharing between the output of the NvJPEGDecoder and the input of the NvVideoEncoder.

I am using C++ with V4L2 and the Nv API modules. I am starting with the Nv sample 12_camera_v4l2_cuda as a base. I cannot use GStreamer or Argus as I have too many odd things to do in the processing stage, so please do not refer me to GStreamer.

So basically I would need this flow: UVC -> DMABUF -> NvJPEGDecoder -> MMAP -> NvVideoEncoder -> MMAP.

But I understand that NvJPEGDecoder is an MMAP exporter and so is the NvVideoEncoder, so they cannot share MMAP buffers. Is this true?

Can anyone guide me on the most efficient way (or any way) to get buffers from the JPEG decoder into the NvVideoEncoder? Do I actually have to copy the MMAP buffers output by the NvJPEGDecoder into a DMABUF for the NvVideoEncoder's input, or is there a way to share the buffer?
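
To make the question concrete, the brute-force copy path I can see would be something like the sketch below. This is not working code, just an illustration of what I am asking about: copy_path, jpegdec, enc and index are placeholder names, it assumes the decoder output planes and the encoder output-plane buffer have identical formats, and it glosses over the fact that the decoder's YUV422 output would still need converting to the encoder's YUV420 input.

```cpp
#include <cstring>

#include "NvJpegDecoder.h"
#include "NvVideoEncoder.h"

// The "copy" path: decode into an MMAP NvBuffer, then memcpy plane-by-plane
// into one of the encoder's MMAP output-plane buffers.
// jpeg_data/jpeg_size come from the camera; index is the encoder buffer index.
static int copy_path(NvJPEGDecoder *jpegdec, NvVideoEncoder *enc,
                     unsigned char *jpeg_data, uint32_t jpeg_size, uint32_t index)
{
    NvBuffer *dec_buf = NULL;
    uint32_t pixfmt, width, height;
    if (jpegdec->decodeToBuffer(&dec_buf, jpeg_data, jpeg_size,
                                &pixfmt, &width, &height) != 0)
        return -1;

    // Assumes src and dst plane formats match; a real version would also
    // handle the 422 -> 420 conversion.
    NvBuffer *enc_buf = enc->output_plane.getNthBuffer(index);
    for (uint32_t p = 0; p < dec_buf->n_planes; p++)
    {
        NvBuffer::NvBufferPlane &src = dec_buf->planes[p];
        NvBuffer::NvBufferPlane &dst = enc_buf->planes[p];
        uint32_t row = src.fmt.bytesperpixel * src.fmt.width;
        for (uint32_t h = 0; h < src.fmt.height; h++)
            memcpy(dst.data + h * dst.fmt.stride,
                   src.data + h * src.fmt.stride, row);
        dst.bytesused = dst.fmt.stride * dst.fmt.height;
    }
    // ...then qBuffer() enc_buf on the encoder's output plane as usual.
    return 0;
}
```

That double handling of every frame is exactly what I am hoping to avoid.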

Anyone please???

Hi,
Please refer to
https://devtalk.nvidia.com/default/topic/1062492/jetson-tx2/tegra-multimedia-samples-not-working-properly/post/5383923/#5383923

It demonstrates the case where the USB camera outputs YUV422. You may apply it to the MJPEG decoding case and it should work fine.
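
For the MJPEG case, the path could look roughly like the sketch below. It is only an outline, not tested code: it assumes you decode with decodeToFd(), convert into an NV12/YUV420 DMA buffer created once with NvBufferCreateEx(), and set up the encoder output plane with V4L2_MEMORY_DMABUF; feed_frame and dst_fd are placeholder names.

```cpp
#include <cstring>
#include <linux/videodev2.h>

#include "NvJpegDecoder.h"
#include "NvVideoEncoder.h"
#include "nvbuf_utils.h"

// Decode one MJPEG frame and hand it to the encoder via DMA buffers.
// dst_fd is a DMA buffer created once up front with NvBufferCreateEx();
// the encoder's output_plane was set up with V4L2_MEMORY_DMABUF.
static int feed_frame(NvJPEGDecoder *dec, NvVideoEncoder *enc,
                      unsigned char *jpeg_data, uint32_t jpeg_size,
                      int dst_fd, uint32_t index)
{
    // Hardware-decode the JPEG into a DMA buffer owned by the decoder.
    int dec_fd = -1;
    uint32_t pixfmt, width, height;
    if (dec->decodeToFd(dec_fd, jpeg_data, jpeg_size, pixfmt, width, height) != 0)
        return -1;

    // Copy/convert (YUV422 -> encoder input format) into the encoder-side DMA buffer.
    NvBufferTransformParams trans;
    memset(&trans, 0, sizeof(trans));
    trans.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    trans.transform_filter = NvBufferTransform_Filter_Smart;
    if (NvBufferTransform(dec_fd, dst_fd, &trans) != 0)
        return -1;

    // Queue the DMA buffer fd on the encoder's output plane.
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];
    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, sizeof(planes));
    v4l2_buf.index = index;
    v4l2_buf.m.planes = planes;
    v4l2_buf.m.planes[0].m.fd = dst_fd;
    v4l2_buf.m.planes[0].bytesused = 1;  // non-zero, or the buffer is treated as EOS
    return enc->output_plane.qBuffer(v4l2_buf, NULL);
}
```

This way the decoded frame stays in DMA memory end to end; the only copy is the NvBufferTransform(), which also takes care of the 422-to-420 conversion for the encoder.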

A known issue in MJPEG decoding on r32.2.1:
https://elinux.org/L4T_Jetson/r32.2.1_patch

Thanks heaps for your reply DaneLLL, but I probably should have mentioned that I need the UVC webcam output to be MJPEG, as I have 4 cameras to handle on the input side. 4 x YUV output is too much data at 1080p 30fps. I only need to H.265-encode one stream, which has the 4 cameras overlaid into it.

Thanks.

Hi,
The patch is pretty close to your use case. You may refer to it and integrate it into your application.

All good, thanks DaneLLL, I have it all working now.