Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1
Hello,
I am trying to run an engine model that modifies the video itself; the output tensor contains RGB frames. The output comes out in the tensor metadata, so I have to take the frame from the tensor meta and paste it into the frame surface obtained with
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
I know there is a similar function that uses the GPU, but it is not available on Jetson. Is there another way to do what I am trying to do? Is it possible to get the output of nvinfer directly on the stream, without having to go out in a probe and do that replacement on the CPU?
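For reference, the CPU path described above looks roughly like the probe below. The layer index, output shape, FP32 dtype, and value range are assumptions for illustration only; they are not taken from the actual model.

```python
import ctypes

import gi
import numpy as np
import pyds

gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Assumed output geometry of the model, for illustration only.
HEIGHT, WIDTH = 720, 1280


def sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # Assumption: layer 0 holds the generated frame as FP32 HxWx3 in [0, 255].
                layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
                out = np.ctypeslib.as_array(ptr, shape=(HEIGHT, WIDTH, 3))
                # Host-mapped frame surface; the pipeline must deliver RGBA here.
                n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
                # The step in question: pasting the tensor output into the frame on the CPU.
                n_frame[:, :, :3] = out.astype(np.uint8)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```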
You need to ensure that the format and resolution of the video produced by the tensor have not changed.
Because both buffers are in CUDA memory, they can be operated on directly with CUDA. You can refer to the demo app deepstream-imagedata-multistream-cupy.
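As a rough illustration of that pattern (adapted from deepstream-imagedata-multistream-cupy), the sketch below wraps the frame surface in a CuPy array so the write stays in GPU memory. Note that pyds.get_nvds_buf_surface_gpu is the dGPU call mentioned in the question, and out_gpu is assumed to be the model output already available as a CuPy array, so treat this as an outline of the approach rather than a verified Jetson recipe.

```python
import ctypes

import cupy as cp
import pyds


def paste_output_on_gpu(gst_buffer, frame_meta, out_gpu):
    """Sketch: write a CuPy array (the model's RGB output) into the frame surface
    without copying through the CPU. `out_gpu` is assumed to be an HxWx3 uint8
    CuPy array matching the frame resolution."""
    # Wrap the frame's NvBufSurface as a CuPy ndarray, following the pattern
    # used in the deepstream-imagedata-multistream-cupy sample.
    data_type, shape, strides, dataptr, size = pyds.get_nvds_buf_surface_gpu(
        hash(gst_buffer), frame_meta.batch_id)
    ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
    ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
    c_data_ptr = ctypes.pythonapi.PyCapsule_GetPointer(dataptr, None)
    unowned = cp.cuda.UnownedMemory(c_data_ptr, size, owner=None)
    memptr = cp.cuda.MemoryPointer(unowned, 0)
    frame_gpu = cp.ndarray(shape=shape, dtype=data_type, memptr=memptr,
                           strides=strides, order="C")

    # Use the null stream so other CUDA work on the buffer stays ordered.
    stream = cp.cuda.stream.Stream(null=True)
    with stream:
        # The frame surface is RGBA; overwrite its RGB channels with the model output.
        frame_gpu[:, :, :3] = out_gpu
    stream.synchronize()
```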
No. If you use C/C++, it is easy to use the CUDA API to copy the buffer without going through the CPU. For the CUDA API from Python, you can try CUDA Python.
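If you go the CUDA Python route, a hypothetical sketch with the cuda-python runtime bindings could look like the following. The function and all of its parameters (frame_ptr, frame_pitch, out_ptr, row_bytes, height) are placeholders you would have to fill in from the NvBufSurface and the tensor meta; they are not DeepStream fields.

```python
# Hypothetical sketch using the cuda-python runtime bindings (pip install cuda-python).
from cuda import cudart


def copy_output_to_frame_d2d(frame_ptr, frame_pitch, out_ptr, row_bytes, height):
    """Device-to-device copy of a pitched image, with no CPU round trip.

    frame_ptr / frame_pitch: destination surface pointer and pitch (from NvBufSurface).
    out_ptr: source pointer to the model output (assumed tightly packed).
    row_bytes / height: bytes per row to copy and number of rows.
    """
    err, = cudart.cudaMemcpy2D(
        frame_ptr, frame_pitch,   # destination and its pitch in bytes
        out_ptr, row_bytes,       # source and its pitch (packed, so pitch == row_bytes)
        row_bytes, height,        # width of each row in bytes, number of rows
        cudart.cudaMemcpyKind.cudaMemcpyDeviceToDevice,
    )
    if err != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"cudaMemcpy2D failed: {err}")
```

A pitched 2D copy is used here because NvBufSurface planes usually have a pitch larger than width times bytes per pixel.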