How to access the image array in the GPU buffer without copying it to a CPU buffer?

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.x

Hi,
I want to access the image array from the GStreamer buffer and then feed it to a custom model, and I have seen this example.
One big problem with that example is that it copies the frame from the GPU buffer to a CPU buffer. I don't want to copy to a CPU buffer, because I need to convert the frame to a tensor on CUDA and then feed it to the model.
In short, I want to get the image array from the GPU buffer without copying it to a CPU buffer.
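
For reference, the CPU-side pattern I mean looks roughly like this (a simplified sketch modeled on the probe code in the DeepStream Python samples, not the exact code from that example; `pyds.get_nvds_buf_surface` hands back a NumPy array that is then copied on the host):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy as np
import pyds

def buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Returns the frame as a NumPy array (RGBA) -- this is the
        # host-side access I would like to avoid.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(n_frame, copy=True, order='C')
        # ... preprocess on CPU, upload to CUDA, run the model ...
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```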

I wrote a custom GStreamer plugin with a chain function, and I run my custom inference model inside that plugin.
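
(My actual plugin is written with a chain function; structurally it is just the equivalent of an in-place transform element, as in this bare Python sketch, with the inference call sitting where the comment is. The element and class names here are placeholders.)

```python
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstBase', '1.0')
from gi.repository import Gst, GObject, GstBase

class CustomInfer(GstBase.BaseTransform):
    # In-place transform: every buffer passes through and the model runs on it.
    __gstmetadata__ = ('custominfer', 'Filter/Video',
                       'Runs a custom model on every buffer', 'me')
    __gsttemplates__ = (
        Gst.PadTemplate.new('sink', Gst.PadDirection.SINK,
                            Gst.PadPresence.ALWAYS, Gst.Caps.new_any()),
        Gst.PadTemplate.new('src', Gst.PadDirection.SRC,
                            Gst.PadPresence.ALWAYS, Gst.Caps.new_any()),
    )

    def do_transform_ip(self, buf):
        # This is where I need the frame as a GPU tensor, not a CPU array.
        return Gst.FlowReturn.OK

GObject.type_register(CustomInfer)
__gstelementfactory__ = ('custominfer', Gst.Rank.NONE, CustomInfer)
```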

In that example, the NumPy library is used to copy the image array into a CPU buffer. I want to know whether it is possible to use the CuPy library, or another ML framework, to access the image array directly in GPU memory.
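
What I have in mind on the CuPy side is something like the following (a hypothetical sketch: `frame_devptr`, `height`, `width`, and the packed uint8 RGBA layout are placeholders for values that would have to come from `NvBufSurface` on the C/C++ side, since they are not exposed to Python today):

```python
import cupy as cp

def wrap_gpu_frame(frame_devptr, height, width, channels=4):
    """Wrap an existing CUDA device pointer as a CuPy array without any host copy."""
    nbytes = height * width * channels  # assumes packed uint8 RGBA, no row padding
    # UnownedMemory/MemoryPointer let CuPy view memory it did not allocate itself.
    mem = cp.cuda.UnownedMemory(frame_devptr, nbytes, owner=None)
    memptr = cp.cuda.MemoryPointer(mem, offset=0)
    return cp.ndarray((height, width, channels), dtype=cp.uint8, memptr=memptr)

# The resulting array stays on the GPU and could then be handed to a DLPack-aware
# framework, e.g. torch.utils.dlpack.from_dlpack(frame.toDlpack()) for PyTorch.
```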

Sorry to say that we currently don't have Python bindings for the GPU buffer. It is on our roadmap for a future release.

Thanks