Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)**: Jetson
**• DeepStream Version**: 6.2
**• JetPack Version (valid for Jetson only)**: 5.1.1
I am trying to run an engine model that modifies the video itself; its output tensor is RGB frames. The output comes out in the tensor metadata, so I have to take the frame from the tensor meta and paste it into the buffer obtained with:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
I know there is a similar function that uses the GPU, but it is not available on Jetson. Is there another way to do what I am trying to do? Is it possible to get the output of nvinfer directly on the stream, without having to go out in a probe and do that replacement on the CPU?
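For reference, the CPU-side replacement step described above can be sketched roughly as follows. This is a minimal sketch, not the forum's code: the assumed tensor layout (float CHW RGB in [0, 1]) and the `tensor_to_rgba` helper name are assumptions that depend on your model, and the surface format (uint8 HWC RGBA) is what `pyds.get_nvds_buf_surface()` typically exposes on Jetson.

```python
import numpy as np

def tensor_to_rgba(tensor, height, width):
    """Convert an assumed float CHW RGB tensor in [0, 1] to the uint8
    HWC RGBA layout of the mapped frame surface."""
    chw = tensor.reshape(3, height, width)
    hwc = np.transpose(chw, (1, 2, 0))                      # CHW -> HWC
    rgb = (np.clip(hwc, 0.0, 1.0) * 255.0).astype(np.uint8)
    rgba = np.empty((height, width, 4), dtype=np.uint8)
    rgba[:, :, :3] = rgb
    rgba[:, :, 3] = 255                                     # opaque alpha
    return rgba

# Inside the pad probe, the result would be written back in place, e.g.:
#   n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
#   n_frame[:, :, :] = tensor_to_rgba(layer_output, H, W)
```

This is exactly the CPU round trip the question is asking how to avoid: the tensor output is copied to host memory, converted with NumPy, and written back into the mapped surface.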
Thank you for your help
Could you attach your whole pipeline and describe your use case in detail?
filesrc → decodebin → nvvideoconvert → caps → nvinfer → queue → nv3dsink
The issue is that I would like to use the model as an image enhancer, so I have to exchange the original image with the output image of my model.
Is this the best way to do it, through DeepStream, or should I look for other ways?
You need to ensure that the format and resolution of the video produced by the tensor have not changed.
Because both buffers are in CUDA memory, they can be operated on directly with CUDA. You can refer to the demo app deepstream-imagedata-multistream-cupy.
I know about this example and I can't use it.
As I mentioned in my original post, I work with Jetson, not x86. So is there a way in DeepStream, or should I consider another approach? Thanks
No. If you use C/C++, it is easy to use the CUDA API to copy the buffer without involving the CPU. For the CUDA API from Python, you can try CUDA Python.
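The device-to-device copy suggested in the reply could look roughly like this with the CUDA Python bindings (`cuda-python`). This is a sketch under stated assumptions: the function name and the `src_ptr`/`dst_ptr`/`nbytes` parameters are hypothetical, and it presumes you already hold raw CUDA device addresses for the tensor output and the frame surface, with matching size and format. Note that on Jetson the frame surface is typically `NVBUF_MEM_SURFACE_ARRAY` memory, so it generally has to be mapped to a CUDA-accessible pointer first (e.g. via the EGLImage interop path) before a plain `cudaMemcpy` is possible.

```python
def copy_tensor_to_surface(src_ptr, dst_ptr, nbytes):
    """Device-to-device copy with cuda-python, avoiding any CPU staging.

    src_ptr / dst_ptr: raw CUDA device addresses as Python ints (hypothetical
    parameters; in a real probe they would come from the tensor meta and
    from a CUDA-mapped NvBufSurface respectively).
    nbytes: size of the frame data in bytes.
    """
    # Imported lazily so this module can be loaded without a GPU present.
    from cuda import cudart  # pip install cuda-python

    (err,) = cudart.cudaMemcpy(
        dst_ptr,
        src_ptr,
        nbytes,
        cudart.cudaMemcpyKind.cudaMemcpyDeviceToDevice,
    )
    if err != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"cudaMemcpy failed: {err}")
```

The equivalent C/C++ version is a single `cudaMemcpy(dst, src, nbytes, cudaMemcpyDeviceToDevice)` inside the probe, which is why the reply calls the C/C++ route the easy one.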