Video Image Compositor API

According to the documentation, it should be possible to use the VIC to offload distortion correction and other operations from the CPU. However, I don't see any API for doing this - is this exposed somewhere and I just can't find the docs?


Please refer to the APIs in tegra_multimedia_api/include/nvbuf_utils.h:

```c
/**
 * Composites multiple input DMA buffers into one output DMA buffer.
 *
 * This function supports composition of multiple input frames into one
 * composited output.
 *
 * @param[in] src_dmabuf_fds   Array of DMABUF FDs of the source buffers to composite from.
 * @param[in] dst_dmabuf_fd    DMABUF FD of the destination buffer for the composition.
 * @param[in] composite_params Composition parameters.
 * @return 0 for success, -1 for failure.
 */
int NvBufferComposite (int *src_dmabuf_fds, int dst_dmabuf_fd, NvBufferCompositeParams *composite_params);
```

Updated link:

I'm still not seeing how this would let me do things like barrel distortion correction - AFAIK that API only lets you merge various ROIs from input buffers and place them into an output buffer?


This is not supported.

You may start with tegra_multimedia_api samples first. The samples are installed to /usr/src/ through SDK Manager.

A couple of suggestions:

  • Use OpenGL shaders to do the correction. GStreamer has good OpenGL support, and on Xavier you can handle full-HD resolutions easily.
  • Use nvivafilter and do the correction in CUDA. There are plenty of open reference implementations you can refer to.

We’re already putting a pretty heavy load on the GPU for our ML pipelines, so we were looking to offload to the available hardware. Thanks for the suggestions though.