Facing CUDA memory issue

We are trying to run our customised DeepStream 5.0 app, but it fails with a CUDA memory error. Please help us resolve this issue; the error details are below.

   ERROR: nvdsinfer_context_impl.cpp:1448 Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress

0:00:02.558405766 2236 0x7feac801dc00 WARN nvinfer gstnvinfer.cpp:1983:gst_nvinfer_output_loop: error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:00:02.558484596 2236 0x7feac801dc00 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1482> [UID = 1]: Tried to release an outputBatchID which is already with the context
Error: gst-stream-error-quark: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR (1): gstnvinfer.cpp(1983): gst_nvinfer_output_loop (): /GstPipeline:pipeline1/GstNvInfer:primary-inference
Exiting app
Cuda failure: status=700 in CreateTextureObj at line 2496
nvbufsurftransform.cpp(2369) : getLastCudaError() CUDA error : Recevied NvBufSurfTransformError_Execution_Error : (400) invalid resource handle.
Cuda failure: status=46
nvbufsurface: Error(-1) in releasing cuda memory
Segmentation fault (core dumped)


Were you able to solve this issue?

I have a similar issue that causes my project to exit at the same frame number every time.

ERROR: nvdsinfer_context_impl.cpp:1572 Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:00:23.981264164   506      0x2678720 WARN                 nvinfer gstnvinfer.cpp:2012:gst_nvinfer_output_loop:<primary-inference> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:00:23.981319147   506      0x2678720 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1606> [UID = 1]: Tried to release an outputBatchID which is already with the context
Segmentation fault (core dumped)

Using two models (a classifier and a detector) with DeepStream 5 and Python 3.6.

My issue was resolved when I set scaling-filter=1 and scaling-compute-hw=1, and made sure batch-size matched everywhere and equalled the number of sources.
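
For reference, a minimal sketch of that configuration, assuming the standard [property] group of a Gst-nvinfer config file (the file name and the batch size of 4 are placeholders for your own setup):

   # config_infer_primary.txt (hypothetical name) -- Gst-nvinfer [property] group
   [property]
   batch-size=4          # keep equal to the nvstreammux batch-size and the number of sources
   scaling-filter=1      # 1 = bilinear interpolation when scaling frames for inference
   scaling-compute-hw=1  # 1 = run the scaling on the GPU

In a Python app the muxer side can be kept in sync with streammux.set_property("batch-size", 4), so every element in the pipeline uses the same batch size as the number of sources.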