Hello NVIDIA forum!
I’m currently working on a project where we’re developing custom cameras and accessing them through an AGX Xavier’s PCIe interface. For that purpose we are building a V4L2 driver and a GStreamer plugin inspired by the gstnvv4l2camerasrc sample.
The idea behind these choices is to use NvBuffers for our DMABUF transfers, so we can get our frames directly into GPU memory (NVMM) and leverage the hardware encoders and CUDA features without any copies. This is a strong requirement, since our projected use cases will most certainly need all the bandwidth the Xavier has to offer.
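For context, the zero-copy path we are aiming for can be sketched as a gst-launch pipeline (a sketch only, using the stock nvv4l2camerasrc element that our plugin is inspired by; the exact caps and resolution here are illustrative assumptions):

```shell
# Frames are captured straight into NVMM surfaces (NvBuffer) and handed
# to the hardware encoder without ever being copied through CPU memory.
gst-launch-1.0 nvv4l2camerasrc ! \
  'video/x-raw(memory:NVMM), format=UYVY, width=1920, height=1080' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! \
  nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4
```

Our plugin would take the place of nvv4l2camerasrc, but with cameras whose native formats are not necessarily in the caps above.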
We’re concerned about the few pixel formats offered by the NvBufferCreate API (nvbuf_utils.h). They are far from exhaustive, and we’d like to cover some formats that are not listed in enum NvBufferColorFormat. Some cameras might only support a particular format, in which case we won’t be able to get optimal performance on video transfers / processing.
Do these limitations come from hardware constraints? The Technical Reference Manual lists a whole range of pixel formats in “Table 7.39 Pixel Format Support”. How can we work with those? What is the purpose of the NvBufferColorFormat field in NvBufferParams? Is it meant to carry a description of the data that the libraries / GStreamer plugins rely on for their image processing?
Is there a way to allocate GPU memory without being constrained by this NvBufferColorFormat enumeration, i.e. to specify our own pixel format at allocation time and then do what we need with CUDA or other transformation / encoding tools and libraries?
Best regards,
Jean-Christophe BEGUE