DeepStream CUDA Illegal Memory Address

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): dGPU
• DeepStream Version: 6.0.1
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03

I have recently been developing a face-blurring pipeline and have had success so far; testing with basic videos works fine.

The pipeline is developed and run inside the ‘6.0.1-triton’ container from NVIDIA, using the Python bindings.
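For context, the overall structure is along these lines (a minimal sketch only, not the actual application code; the URI, detector config path, tracker library path, and sink choice are placeholders, and the blurring logic itself is omitted):

```python
# Minimal sketch of the pipeline layout being described (placeholder paths/URIs).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# uridecodebin -> nvstreammux -> nvinfer (face detector) -> nvtracker
# -> nvvideoconvert -> nvdsosd -> sink
pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///opt/videos/sample.mp4 ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=face_detector_config.txt ! "
    "nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! "
    "nvvideoconvert ! nvdsosd ! fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until an error or end-of-stream message appears on the bus.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```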

However, when testing a different sample video, one with many detectable faces, I receive this error a couple of seconds into the video:

[ ERROR: CUDA Runtime ] an illegal memory access was encountered
[WARN ] 2022-08-07 04:28:31 (cudaErrorIllegalAddress)
GPUassert: an illegal memory access was encountered src/modules/NvMultiObjectTracker/context.cpp 197
ERROR: [TRT]: 1: [convolutionRunner.cpp::checkCaskExecError::440] Error Code 1: Cask (Cask Convolution execution)
ERROR: [TRT]: 1: [apiCheck.cpp::apiCatchCudaError::17] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
ERROR: nvdsinfer_backend.cpp:310 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1643 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:03.660359605 2431 0x3583980 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
[WARN ] 2022-08-07 04:28:31
terminate called without an active exception
ERROR: nvdsinfer_context_impl.cpp:341 Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: nvdsinfer_context_impl.cpp:1619 Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:03.660521480 2431 0x3583980 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
Aborted (core dumped)

Running nvidia-smi while the application is active shows that at most 2640MiB / 6144MiB of GPU memory is used.

The pipeline runs with nvbuf-memory-type = NVBUF_MEM_CUDA_UNIFIED set on all ‘nvvideoconvert’ elements.
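The property is set the same way as in the deepstream_python_apps samples; a minimal sketch (the element name here is a placeholder for the instances in the actual pipeline):

```python
# Sketch: applying CUDA unified memory to a nvvideoconvert instance via pyds.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

Gst.init(None)

mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
nvvidconv.set_property("nvbuf-memory-type", mem_type)
# The reference samples set the same property on nvstreammux as well.
```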

Any help with this would be appreciated.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

  1. What is your GPU model?
  2. What is your media pipeline? Which DeepStream sample is your development based on?
  3. Please provide simplified code to reproduce this issue, including the configuration file and video.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.