Multimedia API example using NvBuffer shared among several NvVideoConverter objects

Hello,
We have a use case that may require sharing a DMA buffer among several downstream NvVideoConverter objects in a video pipeline implemented at the Multimedia API level. The use case would be similar to a GStreamer pipeline in which an nvarguscamerasrc element is followed by an (nv)tee, and the (nv)tee is then connected to several downstream nvvidconv elements.
Due to the way that nvarguscamerasrc is implemented, we may have to use the low-level Multimedia API instead of the GStreamer approach.
The questions then are:

  1. What is the correct way to implement something like an (nv)tee at the Multimedia API level? If we assume that the DMA buffer fds are exposed in user space, it seems that the buffers would be wrapped in NvBuffer objects; however, I am not sure exactly how the reference counting would be implemented. I assume the data flow would be as follows (a rough sketch is given after this list):
  • Have a thread that dequeues the DMA buffer fds from the input camera source.
  • On this thread, each dequeued DMA buffer fd would be enqueued on the output plane of each NvVideoConverter. So if I have 3 NvVideoConverter objects following my source, the DMA fd would have to be queued on the output plane of each of the 3 NvVideoConverter objects. Is this correct? At this point I am assuming that each time the DMA fd is queued, the reference count on the NvBuffer is incremented.
  • On the capture plane of each of the NvVideoConverter objects, the NvBuffer should then be unref'd (I am not sure on this point). At some point, when all the NvVideoConverter objects are done processing the input DMA fd (indicated by the reference count reaching 0?), the NvBuffer should be re-queued on the input device. I am not sure exactly what this process should be, and I am seeking some guidance here. An example of such a scenario would be very helpful; I looked through the Multimedia API samples and none of them seem to match this scenario exactly.
  2. Are there any limitations on an input NvBuffer DMA fd being accessed by multiple NvVideoConverter elements in parallel (on separate threads)?
  3. Is there a recommended way to reserve (possibly contiguous) DMA memory for the set of NvV4l2Element objects that will be used in our pipelines, to make sure that enough DMA-able memory is available? Our specific use case prioritizes video pipelines/processing over all other functionality. Is this done through nodes in the dts(i) files for the Tegra platform?
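
To make the flow in question 1 concrete, here is a rough sketch of the fan-out and reference-counting logic I have in mind. The dequeue/queue/requeue helpers are placeholders for the real V4L2/Multimedia API calls, and the explicit std::atomic counter is my assumption about how the sharing could be tracked, not necessarily how NvBuffer handles it internally:

#include <atomic>

constexpr int NUM_CONVERTERS = 3;

// One camera frame shared among all downstream converters.
struct SharedFrame
{
    int dmabuf_fd;                 // DMA buffer fd dequeued from the camera source
    std::atomic<int> refcount{0};  // one reference per downstream converter
};

// Placeholder helpers standing in for the real Multimedia API / V4L2 calls.
SharedFrame *dequeue_from_source();
void queue_on_converter_output_plane(int converter_index, SharedFrame *frame);
void requeue_on_source(SharedFrame *frame);

// Source thread: fan each captured frame out to every converter.
void source_loop()
{
    for (;;)
    {
        SharedFrame *frame = dequeue_from_source();
        frame->refcount.store(NUM_CONVERTERS);
        for (int i = 0; i < NUM_CONVERTERS; ++i)
            queue_on_converter_output_plane(i, frame);
    }
}

// Called when a converter is finished with the input fd (e.g. when its
// output-plane buffer is dequeued back): the last converter to finish
// returns the fd to the camera source.
void on_converter_done(SharedFrame *frame)
{
    if (frame->refcount.fetch_sub(1) == 1)
        requeue_on_source(frame);
}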

Thanks
Victor

Hi,
We have reference samples in

/usr/src/jetson_multimedia_api/samples/09_camera_jpeg_capture/
/usr/src/jetson_multimedia_api/samples/10_camera_recording/

You should see the samples after installation through SDKManager.

For doing conversion, we recommend using the NvBuffer APIs defined in nvbuf_utils. NvBufferTransform() is the API for format conversion/cropping/scaling and offers an easy-to-use interface.
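
As a minimal sketch of that flow (the NV12 format, the buffer tag, and the Smart filter below are example values only, and error handling is trimmed):

#include "nvbuf_utils.h"

// Convert/scale the contents of src_fd into a freshly allocated dmabuf.
int convert_frame(int src_fd, int out_width, int out_height)
{
    // Allocate a destination buffer (pitch-linear NV12 as an example format).
    NvBufferCreateParams create_params = {0};
    create_params.width = out_width;
    create_params.height = out_height;
    create_params.payloadType = NvBufferPayload_SurfArray;
    create_params.layout = NvBufferLayout_Pitch;
    create_params.colorFormat = NvBufferColorFormat_NV12;
    create_params.nvbuf_tag = NvBufferTag_VIDEO_CONVERT;

    int dst_fd = -1;
    if (NvBufferCreateEx(&dst_fd, &create_params) != 0)
        return -1;

    // One call does format conversion plus scaling from src_fd into dst_fd.
    NvBufferTransformParams transform_params = {0};
    transform_params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform_params.transform_filter = NvBufferTransform_Filter_Smart;

    int ret = NvBufferTransform(src_fd, dst_fd, &transform_params);

    NvBufferDestroy(dst_fd);
    return ret;
}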

For more information, please look at
https://docs.nvidia.com/jetson/archives/l4t-multimedia-archived/l4t-multimedia-3231/index.html