Hello,
We have a CUDA-based SDK that produces buffers in GPU memory (via cudaMallocPitch, for example). We would like to hook it up to GStreamer on the Xavier. To do this, I would like to wrap our SDK in a GStreamer source. We also have the option of passing these buffers out via an EGL Stream.
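For context, here is roughly how our SDK hands frames to us (a minimal sketch with placeholder names, not the real SDK API):

#include <cuda_runtime.h>

/* Each frame lives in pitched device memory allocated roughly like this. */
static void *alloc_frame (size_t width_bytes, size_t height, size_t *pitch)
{
  void *d_frame = NULL;
  /* cudaMallocPitch chooses a row stride (pitch) suited to the hardware. */
  if (cudaMallocPitch (&d_frame, pitch, width_bytes, height) != cudaSuccess)
    return NULL;
  return d_frame;
}

/* The SDK writes each frame's pixels into this device memory; the question
   below is how to hand such a pointer to downstream GStreamer elements. */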
I have tried the method described here, and it works:
https://github.com/DaneLLL/gstreamer_eglstreamsrc
However, the method above requires a custom launcher program that creates the EGL Producer and EGL Stream explicitly, which prevents me from passing a pure launch string to the stock gst-launch-1.0 program. Furthermore, I'm not very comfortable with the delays in the producer code that ensure the Producer connects to the stream after the Consumer - it feels like a race condition.
(Ultimately, I would like my data to be available via RTSP, and gst-rtsp-server's default factory requires a pure launch string, or that I create my own RtspMediaFactory subclass).
For this reason, I have decided to try to implement my own GStreamer source. I want to keep my element's output in GPU memory for efficiency reasons, so I think it needs to advertise the "memory:NVMM" caps feature (i.e. "video/x-raw(memory:NVMM)") on its source pad.
I have created a GstBaseSrc subclass, but I don't know how to implement the _create function for an element that publishes GPU buffers. I am not sure whether there is an established convention for implementing the "memory:NVMM" capability, or which allocation function to use so that downstream GStreamer elements know the buffer is in GPU memory. Is it as simple as calling cudaMallocPitch?
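To make the question concrete, here is the skeleton I have so far (element name, caps values, and metadata are placeholders); the body of the create vfunc is exactly the part I don't know how to write:

#include <gst/gst.h>
#include <gst/base/gstbasesrc.h>

typedef struct { GstBaseSrc parent; }      GstMySdkSrc;
typedef struct { GstBaseSrcClass parent; } GstMySdkSrcClass;

G_DEFINE_TYPE (GstMySdkSrc, gst_my_sdk_src, GST_TYPE_BASE_SRC);

/* Advertise NVMM caps on the source pad (values here are just examples). */
static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src",
    GST_PAD_SRC, GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw(memory:NVMM), "
                     "format=(string)NV12, width=(int)1920, height=(int)1080, "
                     "framerate=(fraction)30/1"));

static GstFlowReturn
gst_my_sdk_src_create (GstBaseSrc *src, guint64 offset, guint size,
                       GstBuffer **buf)
{
  /* This is the open question: the SDK gives me a cudaMallocPitch'ed device
     pointer here, and I need to wrap it in a GstBuffer that downstream
     elements will recognize as NVMM / GPU memory. */
  return GST_FLOW_ERROR;
}

static void
gst_my_sdk_src_class_init (GstMySdkSrcClass *klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);
  GstBaseSrcClass *basesrc_class = GST_BASE_SRC_CLASS (klass);

  gst_element_class_add_static_pad_template (element_class, &src_template);
  gst_element_class_set_static_metadata (element_class, "My SDK source",
      "Source/Video", "Pushes SDK frames as NVMM buffers", "me");

  basesrc_class->create = GST_DEBUG_FUNCPTR (gst_my_sdk_src_create);
}

static void
gst_my_sdk_src_init (GstMySdkSrc *self)
{
  /* Behave like a live capture source producing timestamped frames. */
  gst_base_src_set_live (GST_BASE_SRC (self), TRUE);
  gst_base_src_set_format (GST_BASE_SRC (self), GST_FORMAT_TIME);
}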
Does anyone know of an example of publicly available source code for a GStreamer video source showing how to handle the "memory:NVMM" caps feature correctly and call the appropriate allocation function?
It would be great if we could see how libnveglstreamsrc.so was implemented, for example, but I don't know if its source is publicly available.