Nvinfer CUDA error

I got this error after running my HRNet ONNX model. The engine file builds successfully and usually runs fine, but sometimes I get a CUDA error and my program crashes. Can anyone give me some advice?

ERROR: nvdsinfer_context_impl.cpp:1573 Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:02:38.125219050 345 0x281cb70 WARN nvinfer gstnvinfer.cpp:2021:gst_nvinfer_output_loop: error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:02:38.125259414 345 0x281cb70 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1607> [UID = 1]: Tried to release an outputBatchID which is already with the context
Cuda failure: status=700 in CreateTextureObj at line 2902
nvbufsurftransform.cpp:2703: => Transformation Failed -2

Error: gst-stream-error-quark: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR (1): gstnvinfer.cpp(2021): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:pgie
[ERROR] 2021-09-14 09:41:07 Exiting the Stream worker thread failed with exception: VPI_ERROR_INTERNAL: (cudaErrorIllegalAddress)
[WARN ] 2021-09-14 09:41:07 (cudaErrorIllegalAddress)
[WARN ] 2021-09-14 09:41:07 (cudaErrorIllegalAddress)
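For context: cudaErrorIllegalAddress (error 700) means some kernel read or wrote an out-of-bounds device address; the failing event synchronize and the NvBufSurfTransform error only report it after the fact. One common way to localize the faulting kernel is to run the pipeline under cuda-memcheck, which traps the first illegal access at its source instead of at the next sync point. A hedged sketch (the gst-launch pipeline below is a placeholder, not the original poster's actual command):

```shell
# cuda-memcheck reports the first out-of-bounds access with the kernel
# name and backtrace, rather than deferring the error to a later
# cudaEventSynchronize. Substitute your real application or pipeline.
cuda-memcheck --tool memcheck \
    gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
    nvv4l2decoder ! nvinfer config-file-path=config_infer.txt ! fakesink
```

Expect a large slowdown while memcheck is active; the useful output is the first "Invalid __global__ read/write" record and the kernel it names.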

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU) 2080 ti
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.2.1
• NVIDIA GPU Driver Version (valid for GPU only) 460.91
• Issue Type (questions, new requirements, bugs) CUDA error
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or which sample application — and the function description.)

Sorry for the late response. Is this still an issue you need support with?

Thanks

Sorry for the late response. Can you try a TensorRT test with trtexec:

/usr/src/tensorrt/bin/trtexec