(I hope this is the right board for this question - I’m usually on the Xavier AGX boards, but there seem to be more GStreamer-related posts here)
My main objective is this: We have an SDK that produces images in GPU memory. Our code grabs from several cameras and combines the information to produce images or data. We would like to connect our SDK to GStreamer so we can connect it to existing features such as encoding, display, audio capture, etc. To maximize performance, we want to keep the images in GPU memory. To achieve this, we would like to wrap our SDK in a GStreamer source so that it can be treated like any other GStreamer block.
In summary, I would like to find some source code examples of GStreamer sources that emit video directly into GPU memory so that I can do the same. Later, we may also want to create input-output blocks that transform data in GPU memory.
I have managed to create a GstBaseSrc subclass using gst-element-maker, but I got stuck on the _create() implementation: I don't have any examples that show the "standard" way of allocating buffers in GPU memory.
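For reference, here is roughly where I am. This is only a sketch of the create() vfunc using the generic GstBufferPool pattern (the element name my_gpu_src is mine, not from any SDK); the part I can't fill in is how to make the negotiated pool hand out GPU/NVMM-backed buffers:

```c
#include <gst/gst.h>
#include <gst/base/gstpushsrc.h>

/* Sketch of a GstPushSrc create() vfunc. The pool-based pattern is
 * standard GStreamer; backing the pool with GPU (NVMM) memory is
 * exactly the part I'm missing and is NOT shown here. */
static GstFlowReturn
my_gpu_src_create (GstPushSrc * src, GstBuffer ** buf)
{
  GstBufferPool *pool;
  GstFlowReturn ret;

  /* The pool would have been set up during allocation negotiation,
   * ideally with a GPU-memory allocator. */
  pool = gst_base_src_get_buffer_pool (GST_BASE_SRC (src));
  if (pool == NULL)
    return GST_FLOW_NOT_NEGOTIATED;

  ret = gst_buffer_pool_acquire_buffer (pool, buf, NULL);
  gst_object_unref (pool);
  if (ret != GST_FLOW_OK)
    return ret;

  /* Here our SDK would write the frame into the buffer's memory,
   * ideally without it ever leaving the GPU. */
  return GST_FLOW_OK;
}
```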
NVIDIA's Accelerated GStreamer User Guide lists many GStreamer elements available on the Xavier (nvarguscamerasrc, nveglstreamsrc, nvvidconv, nvivafilter) that operate on GPU memory. They seem to advertise this ability by adding '(memory:NVMM)' after the usual 'video/x-raw' capability. I can't find any mention of "memory:NVMM" in the standard GStreamer code, so I don't have any examples to work from. I'm guessing it's specific to NVIDIA's GStreamer elements.
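To illustrate what I mean, here is the kind of pipeline from NVIDIA's documentation where the caps feature appears (Jetson-specific elements, so this only runs on that hardware; the exact resolution/format values are just an example of mine):

```shell
# The "(memory:NVMM)" caps feature keeps frames in NVMM buffers
# between NVIDIA elements instead of copying through system memory.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvoverlaysink
```

What I would like my source to do is produce buffers that satisfy such '(memory:NVMM)' caps directly.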
Ideally, I would like to know whether NVIDIA has published any examples that show how this is done, or whether there is a different way of passing data in GPU memory that I should use instead.