Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson AGX Orin
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or sample application it is for, and the function description.)
I am trying to access the frame inside gie_processing_done_buf_prob(). Below is the code snippet I added to achieve this. The code runs without errors, but jtop shows that memory usage keeps increasing, so I suspect a memory leak.
// Declare surface buffer and map info for accessing buffer memory
NvBufSurface *in_surf = nullptr;
GstMapInfo in_map_info;
memset(&in_map_info, 0, sizeof(in_map_info)); // Initialize map info structure
// Map the GstBuffer to read data
if (!gst_buffer_map(buf, &in_map_info, GST_MAP_READ)) {
    std::cerr << "Error: Failed to map GstBuffer" << std::endl;
    return GST_PAD_PROBE_OK;
}
// Assign the mapped buffer data to the surface pointer
in_surf = reinterpret_cast<NvBufSurface *>(in_map_info.data);
if (!in_surf) {
    std::cerr << "Error: NvBufSurface is null" << std::endl;
    gst_buffer_unmap(buf, &in_map_info);
    return GST_PAD_PROBE_OK;
}
// Map the surface to read memory for processing
if (NvBufSurfaceMap(in_surf, -1, -1, NVBUF_MAP_READ) != 0) {
    std::cerr << "Error: Failed to map NvBufSurface for read" << std::endl;
    return GST_PAD_PROBE_OK;
}
// Sync the surface memory for CPU access
if (NvBufSurfaceSyncForCpu(in_surf, -1, -1) != 0) {
    std::cerr << "Error: Failed to sync NvBufSurface for CPU access" << std::endl;
}
// Check if the mapped address for surface is valid
if (!in_surf->surfaceList[0].mappedAddr.addr[0]) {
    std::cerr << "Error: Mapped address is null" << std::endl;
}
// Get the surface dimensions (height, width, and pitch)
guint height = in_surf->surfaceList[0].height;
guint width = in_surf->surfaceList[0].width;
guint pitch = in_surf->surfaceList[0].planeParams.pitch[0];
// Create an OpenCV Mat in RGBA format using the surface data
unsigned char *frame_data = reinterpret_cast<unsigned char *>(in_surf->surfaceList[0].mappedAddr.addr[0]);
cv::Mat frame = cv::Mat(height, width, CV_8UC4, frame_data, pitch); // CV_8UC4: 8-bit unsigned, 4 channels (RGBA)
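// Note: this Mat wraps the mapped surface memory directly; no copy is made and OpenCV does not take ownership of frame_data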
// Convert the frame from RGBA to BGR for further processing
cv::Mat in_frame_bgr;
cv::cvtColor(frame, in_frame_bgr, cv::COLOR_RGBA2BGR);
in_frame_bgr.convertTo(in_frame_bgr, CV_32FC3); // Convert to float
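// cvtColor()/convertTo() allocate in_frame_bgr's own reference-counted buffer, which OpenCV releases automatically when the Mat goes out of scope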
// Unmap the surface after reading
if (NvBufSurfaceUnMap(in_surf, -1, -1) != 0) {
    std::cerr << "Error: Failed to unmap NvBufSurface" << std::endl;
    return GST_PAD_PROBE_OK;
}
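For reference, my understanding is that every successful gst_buffer_map() / NvBufSurfaceMap() must be balanced by a matching gst_buffer_unmap() / NvBufSurfaceUnMap() on every exit path. Below is a condensed sketch of the pattern as I understand it (the probe signature is as in deepstream-app; the per-frame OpenCV processing from my snippet above is elided). Please correct me if this pairing is not what the APIs expect:

#include <cstring>
#include <iostream>
#include <gst/gst.h>
#include "nvbufsurface.h"

static GstPadProbeReturn
gie_processing_done_buf_prob(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    GstMapInfo in_map_info;
    memset(&in_map_info, 0, sizeof(in_map_info));

    // Map the GstBuffer; paired with gst_buffer_unmap() below
    if (!gst_buffer_map(buf, &in_map_info, GST_MAP_READ)) {
        std::cerr << "Error: Failed to map GstBuffer" << std::endl;
        return GST_PAD_PROBE_OK;
    }

    NvBufSurface *in_surf = reinterpret_cast<NvBufSurface *>(in_map_info.data);
    if (in_surf && NvBufSurfaceMap(in_surf, -1, -1, NVBUF_MAP_READ) == 0) {
        NvBufSurfaceSyncForCpu(in_surf, -1, -1);

        // ... per-frame CPU processing on in_surf->surfaceList[0] ...

        // Paired with the NvBufSurfaceMap() above
        NvBufSurfaceUnMap(in_surf, -1, -1);
    }

    // Paired with the gst_buffer_map() above, on every path after a successful map
    gst_buffer_unmap(buf, &in_map_info);
    return GST_PAD_PROBE_OK;
}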
I have similar code in the nvdspreprocess CustomTensorPreparation() function and do not see a memory leak there. If I comment out the above code in gie_processing_done_buf_prob(), memory usage does not increase.
Thanks