In this link, we can access the GStreamer pipeline buffer and convert the frame buffers into a NumPy array. I want to know how I can access the frame buffers in GPU memory and feed them into my custom processor without converting the frames into a NumPy array.
We have two options for using the DeepStream decoder (more efficient than OpenCV + GStreamer):
The first way is to write a custom processing element, register it with GStreamer, put that custom element in the pipeline, and do the processing on the frame buffers there. This approach is good, but it requires writing and knowing GStreamer plugin programming. This is the same approach DeepStream itself takes.
The second way is to use only the decoded frames from that link and then pass the frames into custom processor units. For this part I have two questions:
**1-** Is the GStreamer main loop the same kind of event loop as the asyncio loop?
**2-** As you know, adding extra operations inside the pad probe function causes a performance drop. But I want to know: is it possible to capture the frames in the pad probe function and schedule them with `loop.create_task(process(frame))`, async style, so that we don't wait for the processing to finish? Like this:
```python
def tiler_sink_pad_buffer_probe(pad, info, u_data):
    ...
    # capture the frames in the GPU buffer without converting to numpy
    loop.create_task(process(frame))
    ...
    return Gst.PadProbeReturn.OK
```
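One thing I am unsure about: the pad probe runs on a GStreamer streaming thread, not on the thread running the asyncio loop, and `loop.create_task()` is not thread-safe. A minimal sketch of what I am imagining (no GStreamer here; `fake_pad_probe` and `process` are hypothetical stand-ins for the probe and my processor), assuming `asyncio.run_coroutine_threadsafe()` is the right way to hand a coroutine to the loop from another thread:

```python
import asyncio
import threading

# Hypothetical stand-in for per-frame processing; in the real pipeline this
# would receive a GPU buffer handle instead of an integer.
async def process(frame):
    await asyncio.sleep(0)  # pretend to do some async work
    return frame * 2

def main():
    # Run the asyncio loop on its own thread, like the app's processing loop.
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=loop.run_forever, daemon=True)
    t.start()

    results = []

    # Plays the role of the pad probe: it runs on a different thread than the
    # asyncio loop, so create_task() is NOT safe here;
    # run_coroutine_threadsafe() hands the coroutine to the loop instead.
    def fake_pad_probe(frame):
        fut = asyncio.run_coroutine_threadsafe(process(frame), loop)
        results.append(fut.result())  # a real probe would not block like this

    for i in range(3):
        fake_pad_probe(i)

    loop.call_soon_threadsafe(loop.stop)
    t.join()
    return results

print(main())  # [0, 2, 4]
```

In a real probe I would of course not call `fut.result()` (that blocks the streaming thread); I would just fire the coroutine and return `Gst.PadProbeReturn.OK` immediately.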