• Hardware Platform (Jetson / GPU)
Jetson NX and RTX 3090
• DeepStream Version
6.1.1
• JetPack Version (valid for Jetson only)
5.0.2
Hello, I don’t want to use gst-nvinfer. I use a fakesink to get n_frame and run inference with PyCUDA and TensorRT. The following code runs normally after the frame is copied to the CPU with np.array. How can I run inference without copying the frame to the CPU? Thanks.
bufferPad = self.fakesink.get_static_pad("sink")
bufferPad.add_probe(Gst.PadProbeType.BUFFER, self.buffer_probe, None)
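For reference, upstream of the fakesink I convert the frames to RGBA so that get_nvds_buf_surface can map them. A rough sketch of that part of my setup (element and attribute names like self.nvvidconv, self.capsfilter, self.is_jetson are just placeholders, not my exact code):

# Sketch: convert decoded frames to RGBA before the fakesink.
self.nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convert")
self.capsfilter = Gst.ElementFactory.make("capsfilter", "caps")
self.capsfilter.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
self.fakesink = Gst.ElementFactory.make("fakesink", "sink")

# On the 3090 (dGPU) I also use unified CUDA memory, as in the
# deepstream-imagedata-multistream sample, so get_nvds_buf_surface works there.
if not self.is_jetson:  # placeholder flag
    self.nvvidconv.set_property("nvbuf-memory-type",
                                int(pyds.NVBUF_MEM_CUDA_UNIFIED))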
In self.buffer_probe:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
n_frame_copy = np.array(n_frame, copy=True, order='C')
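Pieced together, the probe callback looks roughly like this (simplified, error handling trimmed; assuming the usual imports: gi/Gst, pyds, numpy as np):

def buffer_probe(self, pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # RGBA frame mapped from the NvBufSurface (still backed by GPU/unified memory)
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # copying to the CPU here is the step I would like to avoid
        n_frame_copy = np.array(n_frame, copy=True, order='C')
        self.infer(n_frame_copy)  # my TensorRT wrapper, see below
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK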
TensorRT inference:
self.inputs[0].host = frame.img
[cuda.memcpy_htod_async(inp.device, inp.host, self.stream) for inp in self.inputs]
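The inference side follows the usual TensorRT Python sample pattern with PyCUDA (sketch; self.inputs/self.outputs hold page-locked host buffers plus device allocations, self.context is the execution context, self.infer is my own wrapper name):

def infer(self, img):
    # img currently arrives as the CPU copy made in the probe
    np.copyto(self.inputs[0].host, img.ravel())
    # host -> device transfer: this is the copy I want to skip by staying on the GPU
    [cuda.memcpy_htod_async(inp.device, inp.host, self.stream) for inp in self.inputs]
    self.context.execute_async_v2(bindings=self.bindings,
                                  stream_handle=self.stream.handle)
    [cuda.memcpy_dtoh_async(out.host, out.device, self.stream) for out in self.outputs]
    self.stream.synchronize()
    return [out.host for out in self.outputs]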