DeepStream surface with PyCUDA

• Hardware Platform (Jetson / GPU)
Jetson NX and RTX 3090
• DeepStream Version
• JetPack Version (valid for Jetson only)

Hello, I don’t want to use gst-nvinfer. Instead, I attach a probe to a fakesink to get n_frame and run inference with PyCUDA and TensorRT. The code below works after the frame is copied to the CPU with np.array. How can I run inference without copying to the CPU? Thanks.

 bufferPad = self.fakesink.get_static_pad("sink")
 bufferPad.add_probe(Gst.PadProbeType.BUFFER, self.buffer_probe, None)

In self.buffer_probe:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
n_frame_copy = np.array(n_frame, copy=True, order='C')
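For context on why copy=True is needed here: np.array(..., copy=True) is what forces the CPU copy. Without it, NumPy only creates a view over the mapped surface, which becomes invalid once the buffer is unmapped. A minimal sketch, with a plain NumPy array standing in for the mapped surface returned by pyds.get_nvds_buf_surface:

```python
import numpy as np

# stand-in for the mapped RGBA surface from pyds.get_nvds_buf_surface()
surface = np.zeros((2, 2, 4), dtype=np.uint8)

view = np.asarray(surface)                        # no copy: shares memory
copied = np.array(surface, copy=True, order='C')  # independent CPU copy

surface[0, 0, 0] = 255
# the view tracks the change, the copy does not
print(view[0, 0, 0], copied[0, 0, 0])
```

So the copy is what makes the data safe to use after the probe returns, at the cost of the CPU round trip the question is trying to avoid.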

TensorRT inference:
 self.inputs[0].host = frame.img
 [cuda.memcpy_htod_async(inp.device, inp.host, self.stream) for inp in self.inputs]

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.

The data format is described in the NvBufSurface and NvBufSurfaceParams sections of the DeepStream 6.1.1 documentation.

And there is a way to get the tensor data with NumPy: see the forum topic “* URGENT * How to convert Deepstream tensor to Numpy?” in Intelligent Video Analytics / DeepStream SDK on the NVIDIA Developer Forums.
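To actually skip the CPU copy on a dGPU with DeepStream 6.1+, pyds.get_nvds_buf_surface_gpu returns a device pointer that CuPy can wrap in place, following NVIDIA’s deepstream_imagedata-multistream_cupy sample. A sketch (imports are deferred into the function so pyds and CuPy are only needed at call time; names follow that sample but treat the details as assumptions to verify against your DeepStream version):

```python
import ctypes

def gpu_frame(gst_buffer, batch_id):
    """Sketch: wrap one frame's NvBufSurface as a CuPy array without
    copying to the CPU (dGPU + DeepStream >= 6.1). The resulting array
    lives in device memory; its .data.ptr can feed a GPU preprocessing
    step or serve as a TensorRT binding address."""
    import pyds
    import cupy as cp
    data_type, shape, strides, dataptr, size = pyds.get_nvds_buf_surface_gpu(
        hash(gst_buffer), batch_id)
    # dataptr is a PyCapsule holding the raw CUDA device pointer
    ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
    ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object,
                                                      ctypes.c_char_p]
    c_ptr = ctypes.pythonapi.PyCapsule_GetPointer(dataptr, None)
    mem = cp.cuda.UnownedMemory(c_ptr, size, owner=None)
    return cp.ndarray(shape=shape, dtype=data_type,
                      memptr=cp.cuda.MemoryPointer(mem, 0),
                      strides=strides, order='C')
```

Note the returned array does not own its memory, so it must only be used while the GstBuffer is alive inside the probe.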

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.