[Suspicious code] Possible memory leak in gstnvinfer.cpp from nvcr.io/nvidia/deepstream:6.3-samples

• DeepStream Version: 6.3
• Issue Type: question

Hi,
I got the DeepStream 6.3 source code from the docker image nvcr.io/nvidia/deepstream:6.3-samples.
I found that gstnvinfer.cpp uses batch.release() to pass a GstNvInferBatch* to convert_batch_and_push_to_input_thread().
However, convert_batch_and_push_to_input_thread() does not delete “batch” before returning FALSE.
Will this cause a memory leak?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs; include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing the issue)

• Requirement details (This is for a new requirement; include the module name, i.e., which plugin or which sample application, and the function description)

Dear Fanzh,

Thanks for your response.
The source code is from /opt/nvidia/deepstream/deepstream-6.3 in the docker image nvcr.io/nvidia/deepstream:6.3-samples.
I think the code logic itself may cause a memory leak, so it is not related to any particular hardware or software version.
The caller is expected to delete the object itself after calling std::unique_ptr<T, Deleter>::release().
I am okay with closing this post if it is not an issue. Thanks :)
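
For reference, here is a minimal, self-contained sketch of that release() contract; the Batch and consume names are hypothetical stand-ins, not the plugin's actual code:

#include <cstdio>
#include <memory>

struct Batch {
  ~Batch () { std::puts ("batch freed"); }
};

/* Stand-in for convert_batch_and_push_to_input_thread(): it receives a raw
 * pointer whose ownership was given up via release(). If it returns early
 * without deleting, the object is never freed. */
static bool
consume (Batch *batch, bool fail)
{
  if (fail) {
    delete batch;   /* omit this line and the early return leaks the batch */
    return false;
  }
  delete batch;     /* stand-in for the downstream consumer freeing it */
  return true;
}

int
main ()
{
  auto batch = std::make_unique<Batch> ();
  /* release() relinquishes ownership WITHOUT freeing the object; from here
   * on, someone must call delete explicitly. */
  consume (batch.release (), /*fail=*/true);
  return 0;
}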

The nvinfer plugin is open source. Can you confirm it, and how would you fix it?

Dear Fanzh,

I do not know how to submit a fix to the gst-nvinfer plugin.
I think the solution is to add “delete batch;” before each “return FALSE;”, as in the code below. Thanks

static gboolean
convert_batch_and_push_to_input_thread (GstNvInfer *nvinfer,
    GstNvInferBatch *batch, GstNvInferMemory *mem)
{
  NvBufSurfTransform_Error err = NvBufSurfTransformError_Success;
  std::string nvtx_str;
  cudaError_t cudaReturn;

  cudaReturn = cudaSetDevice (nvinfer->gpu_id);
  if (cudaReturn != cudaSuccess) {
    GST_ELEMENT_ERROR (nvinfer, RESOURCE, FAILED,
        ("Failed to set cuda device %d", nvinfer->gpu_id),
        ("cudaSetDevice failed with error %s", cudaGetErrorName (cudaReturn)));
  }

  /* Set the transform session parameters for the conversions executed in this
   * thread. */
  err = NvBufSurfTransformSetSessionParams (&nvinfer->transform_config_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (nvinfer, STREAM, FAILED,
        ("NvBufSurfTransformSetSessionParams failed with error %d", err), (NULL));
    delete batch;
    return FALSE;
  }

  nvtxEventAttributes_t eventAttrib = {0};
  eventAttrib.version = NVTX_VERSION;
  eventAttrib.size = NVTX_EVENT_ATTRIB_STRUCT_SIZE;
  eventAttrib.colorType = NVTX_COLOR_ARGB;
  eventAttrib.color = NVTX_DEEPBLUE_COLOR;
  eventAttrib.messageType = NVTX_MESSAGE_TYPE_ASCII;
  nvtx_str = "convert_buf batch_num=" + std::to_string(nvinfer->current_batch_num);
  eventAttrib.message.ascii = nvtx_str.c_str();

  nvtxDomainRangePushEx(nvinfer->nvtx_domain, &eventAttrib);

  if (batch->frames.size() > 0) {
    /* Batched transformation. */
    err = NvBufSurfTransformAsync (&nvinfer->tmp_surf, mem->surf,
              &nvinfer->transform_params, &batch->sync_obj);
  }

  nvtxDomainRangePop (nvinfer->nvtx_domain);

  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (nvinfer, STREAM, FAILED,
        ("NvBufSurfTransform failed with error %d while converting buffer", err),
        (NULL));
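    /* Proposed fix: same manual cleanup on this early-return path. */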
    delete batch;
    return FALSE;
  }

  LockGMutex locker (nvinfer->process_lock);
  /* Push the batch info structure in the processing queue and notify the output
   * thread that a new batch has been queued. */
  g_queue_push_tail (nvinfer->input_queue, batch);
  g_cond_broadcast (&nvinfer->process_cond);

  return TRUE;
}
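
For what it is worth, a hypothetical alternative that would avoid the manual deletes altogether: re-take ownership in a std::unique_ptr at the top of the function and call release() only at the point where the queue accepts the batch. The names below (Batch, convert_and_push, a std::queue standing in for nvinfer->input_queue) are illustrative, not the plugin's actual types:

#include <memory>
#include <queue>

struct Batch { };                        /* stand-in for GstNvInferBatch */

static std::queue<Batch *> input_queue;  /* stand-in for nvinfer->input_queue */

/* Wrap the incoming raw pointer in a unique_ptr so every early return frees
 * the batch automatically; release() only when the queue takes ownership. */
static bool
convert_and_push (Batch *batch_raw, bool transform_ok)
{
  std::unique_ptr<Batch> batch (batch_raw);

  if (!transform_ok)
    return false;                        /* freed here by ~unique_ptr */

  input_queue.push (batch.release ());   /* queue now owns the batch */
  return true;
}

int
main ()
{
  convert_and_push (new Batch (), /*transform_ok=*/false);  /* auto-freed */
  convert_and_push (new Batch (), /*transform_ok=*/true);   /* queued */
  delete input_queue.front ();           /* consumer frees the queued batch */
  input_queue.pop ();
  return 0;
}

With this pattern, every early-return path frees the batch without extra cleanup code.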

Thanks for sharing; we will check.

Thanks for sharing! It will not cause a memory leak; it is a smart pointer, so you do not need to worry about releasing the memory.
