We have a CUDA-based SDK that produces buffers in GPU memory (allocated via
cudaMallocPitch, for example). We would like to hook it up to GStreamer on the Xavier. To do this, I would like to wrap our SDK in a GStreamer source. We also have the option of passing these buffers out via an EGLStream.
I have tried the method described here, and it works.
However, that method requires a custom launcher program that creates the EGL producer and the EGLStream explicitly, which prevents me from passing a pure launch string to the stock gst-launch-1.0 program. Furthermore, I'm not comfortable with the producer code's use of delays to ensure that the producer connects to the stream after the consumer - it feels like a race condition.
(Ultimately, I would like my data to be available via RTSP, and gst-rtsp-server's default media factory takes a pure launch string; otherwise I would have to create my own RtspMediaFactory subclass.)
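For context, this is roughly the pure-launch-string path I'd like to end up with. Note that "mysdksrc" is a hypothetical name for the element I'm trying to write, and the encoder chain is just one plausible example:

```c
/* Minimal gst-rtsp-server setup using only a launch string, no
 * GstRTSPMediaFactory subclass. "mysdksrc" is a placeholder for the
 * source element I want to implement. */
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[])
{
  gst_init(&argc, &argv);

  GMainLoop *loop = g_main_loop_new(NULL, FALSE);
  GstRTSPServer *server = gst_rtsp_server_new();
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();

  /* The default factory only needs a launch description. */
  gst_rtsp_media_factory_set_launch(factory,
      "( mysdksrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! "
      "rtph264pay name=pay0 pt=96 )");
  gst_rtsp_media_factory_set_shared(factory, TRUE);

  gst_rtsp_mount_points_add_factory(mounts, "/test", factory);
  g_object_unref(mounts);

  gst_rtsp_server_attach(server, NULL);
  g_main_loop_run(loop);  /* serves rtsp://<host>:8554/test */
  return 0;
}
```

This is exactly why I need the source to be expressible as a plain element in launch syntax.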
For this reason, I have decided to try to implement my own GStreamer source. I want to keep my module's output in GPU memory for efficiency reasons, so I think it needs to advertise caps with the "memory:NVMM" feature (i.e. "video/x-raw(memory:NVMM)").
I have created a GstBaseSrc subclass, but I don't know how to implement the _create function for an element that publishes GPU buffers. I am not sure whether there is some pre-established standard to follow for implementing the "memory:NVMM" feature, or what allocation function to use so that downstream GStreamer elements know the buffer is in GPU memory. Is it as simple as calling cudaMallocPitch?
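To make the question concrete, here is a sketch of the _create vfunc I have so far. It allocates with cudaMallocPitch and wraps the device pointer in a GstBuffer, but I don't know whether plain wrapping like this is the right way to satisfy "video/x-raw(memory:NVMM)" - I suspect NVMM expects NVIDIA's own buffer type rather than a raw CUDA pointer, which is the part I'm asking about:

```c
/* Sketch only: wraps a cudaMallocPitch allocation in a GstBuffer.
 * A downstream element that tries a CPU-side map of this memory would
 * crash, which is why I suspect a proper NVMM allocator is required. */
#include <gst/gst.h>
#include <gst/base/gstbasesrc.h>
#include <cuda_runtime.h>

#define WIDTH  1920   /* example dimensions */
#define HEIGHT 1080

static void free_cuda_mem(gpointer data)
{
  cudaFree(data);
}

static GstFlowReturn my_src_create(GstBaseSrc *src, guint64 offset,
                                   guint size, GstBuffer **buf)
{
  void *dev_ptr = NULL;
  size_t pitch = 0;

  if (cudaMallocPitch(&dev_ptr, &pitch, WIDTH, HEIGHT) != cudaSuccess)
    return GST_FLOW_ERROR;

  /* TODO: have our SDK render into dev_ptr here. */

  /* Wrap the device pointer as opaque memory; cudaFree runs when the
   * buffer is released. Downstream elements would have to know this is
   * device memory - there is no NVMM metadata attached here. */
  *buf = gst_buffer_new_wrapped_full(GST_MEMORY_FLAG_READONLY,
                                     dev_ptr, pitch * HEIGHT, 0,
                                     pitch * HEIGHT, dev_ptr, free_cuda_mem);
  if (*buf == NULL) {
    cudaFree(dev_ptr);
    return GST_FLOW_ERROR;
  }
  return GST_FLOW_OK;
}
```

If there is an official allocator or GstMemory subclass for NVMM that should replace the gst_buffer_new_wrapped_full call, that is exactly what I'm looking for.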
Does anyone know of an example of publicly available source code for a GStreamer video source showing how to handle the "memory:NVMM" feature correctly and call the appropriate allocation function?
It would be great if we could see how libnveglstreamsrc.so was implemented, for example, but I don't know if it is publicly available.