• Hardware Platform: NVIDIA Tesla T4
• DeepStream Version: 5.1
• TensorRT Version: 7.2.2.3
• NVIDIA GPU Driver Version (valid for GPU only): 460.32…03
• Issue Type (questions, new requirements, bugs): question
Hi, in gst-dsexample I need to get frames for another process.
I already use the get_converted_mat API for that, but it performs a GPU→CPU data transfer, so with multiple frames the process becomes slow.
How can I update the get_converted_mat API to read the frame from the GPU buffer directly?
After the primary GIE there is a custom model (640x480) I need to run, and it is used inside gst-dsexample:
• Iterate over frame_meta_list, extract each frame, and resize it to 640x480 via get_converted_mat().
• Run inference on those images to detect the custom class.
• Call attach-meta.
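For reference, the per-frame flow above looks roughly like this inside gst_dsexample_transform_ip() of the stock gst-dsexample plugin (a hedged sketch based on DeepStream 5.1 sources; attach_metadata_full_frame and the exact get_converted_mat signature may differ slightly in your copy, and the inference step is your own code):

```cpp
/* Sketch of the loop described above, assuming DeepStream 5.1's
 * gst-dsexample. "surface" is the mapped NvBufSurface of the input
 * buffer; "output" is whatever your custom 640x480 model returns. */
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (inbuf);
for (NvDsMetaList *l_frame = batch_meta->frame_meta_list;
     l_frame != NULL; l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

  /* 1. Scale the full frame to the model's 640x480 input. */
  NvOSD_RectParams rect = { 0 };
  rect.width  = dsexample->video_info.width;
  rect.height = dsexample->video_info.height;
  gdouble ratio = 1.0;
  if (get_converted_mat (dsexample, surface, frame_meta->batch_id,
          &rect, ratio, 640, 480) != GST_FLOW_OK)
    continue;  /* this call is where the GPU->CPU copy happens */

  /* 2. Run the custom model on the converted frame (user code). */

  /* 3. Attach detections back to the frame as object metadata. */
  attach_metadata_full_frame (dsexample, frame_meta, ratio, output, 0);
}
```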
So for each extracted frame I am calling the get_converted_mat API, which is essentially a GPU buffer → CPU copy, and this takes time when the number of frames is large.
I want to keep the frame data on the GPU and use it there.
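One way to do this, sketched below under stated assumptions: allocate the plugin's intermediate surface (dsexample->inter_buf) with memType = NVBUF_MEM_CUDA_DEVICE (valid on a dGPU such as the T4), let NvBufSurfTransform do the scaling as before, and then skip NvBufSurfaceMap()/NvBufSurfaceSyncForCpu() entirely, wrapping the device pointer in a cv::cuda::GpuMat instead of copying into a host cv::Mat. The function get_converted_gpu_mat below is a hypothetical variant of get_converted_mat, not an SDK API; the transform-parameter setup is elided and would be filled in exactly as in the stock function:

```cpp
/* Hedged sketch: a GPU-resident variant of get_converted_mat(),
 * assuming DeepStream 5.1 on dGPU and that dsexample->inter_buf was
 * created with create_params.memType = NVBUF_MEM_CUDA_DEVICE. */
#include <opencv2/core/cuda.hpp>
#include "nvbufsurface.h"
#include "nvbufsurftransform.h"

static GstFlowReturn
get_converted_gpu_mat (GstDsExample *dsexample, NvBufSurface *input_buf,
    gint idx, NvOSD_RectParams *crop_rect_params, cv::cuda::GpuMat &out)
{
  /* ip_surf and transform_params are set up exactly as in the stock
   * get_converted_mat() (src/dst rects, scaling filter, etc.). */
  NvBufSurfTransformParams transform_params = { 0 };
  NvBufSurface ip_surf = *input_buf;
  /* ... fill transform_params as in the original function ... */

  if (NvBufSurfTransform (&ip_surf, dsexample->inter_buf,
          &transform_params) != NvBufSurfTransformError_Success)
    return GST_FLOW_ERROR;

  /* No NvBufSurfaceMap()/NvBufSurfaceSyncForCpu(): wrap the CUDA
   * device pointer directly, avoiding the device-to-host copy. */
  NvBufSurfaceParams &p = dsexample->inter_buf->surfaceList[0];
  out = cv::cuda::GpuMat (p.height, p.width, CV_8UC4,
      p.dataPtr, p.pitch);
  return GST_FLOW_OK;
}
```

Any CUDA-based inference or OpenCV CUDA ops can then consume the GpuMat in place; only final, small results (e.g. bounding boxes) need to come back to the CPU for the attach-meta step.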
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Sorry for the late response. Did you find a solution?