ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) dGPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) NO
• TensorRT Version 8.0.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470.42.01
• Issue Type( questions, new requirements, bugs) bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Only when I run this classifier engine with scaling-filter=2 do I get the error. I am attaching the model:
test.onnx.zip (84.3 MB)

ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:00:06.901790258 66966 0x56431ee9e4f0 WARN                 nvinfer gstnvinfer.cpp:2325:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:00:06.901778908 66966 0x56431ee9e320 WARN                 nvinfer gstnvinfer.cpp:2325:gst_nvinfer_output_loop:<secondary-nvinference-engine2> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:06.901951829 66966 0x56431ee9e050 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<secondary-nvinference-engine3> error: Failed to queue input batch for inferencing
0:00:06.902001356 66966 0x56431ee9e320 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-nvinference-engine2> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1789> [UID = 4]: Tried to release an unknown outputBatchID
0:00:06.902064587 66966 0x56431ee9e4f0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: Tried to release an outputBatchID which is already with the context
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:00:06.902386125 66966 0x56431ee9e4f0 WARN                 nvinfer gstnvinfer.cpp:2325:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:00:06.902459484 66966 0x56431ee9e4f0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: Tried to release an outputBatchID which is already with the context

I am sharing the ONNX file for which this error occurs. Note that it happens with scaling-filter=2, 3, and 4, and works fine with scaling-filter=1.

Also, I have tried plain ONNX inference and `trtexec --loadEngine`, and both work fine.

Can you share the steps to reproduce this error? I’m not sure whether the scaling-filter option comes from DeepStream.


Let me try to create a reproducible example.

Here is the scaling-filter documentation.

When I add scaling-filter=2 to the SGIE config files of apps/sample_apps/deepstream-test2, I don’t get the above error. But for some reason, my pipeline suffers from this.
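For reference, this is roughly the change I made — a hedged sketch assuming a deepstream-test2-style SGIE config file; the file name and the surrounding keys are illustrative, and only the scaling-filter line is the actual addition:

```ini
# Hypothetical excerpt from an SGIE config (e.g. dstest2_sgie1_config.txt).
# Only scaling-filter is the relevant addition; other keys are placeholders.
[property]
gpu-id=0
onnx-file=test.onnx
batch-size=16
# 0 = Nearest (default), 1 = Bilinear; 2/3/4 select the algorithms that
# trigger the cudaErrorIllegalAddress failure described above
scaling-filter=2
```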

It seems to be a bug in the legacy version.
1. Can you update to the latest version?
2. Try adding scaling-compute-hw=2 in nvinfer’s configuration file.
The scaling algorithm selected by scaling-filter=2 only works on VIC modules and cannot be used on GPUs.

Hey @junshengy
I am working on a dGPU. If I try scaling-compute-hw=2, I get this error:

Error. Invalid value for 'scaling-compute-hw':'2'
Failed to parse group property

There has to be a way to use bilinear interpolation on dGPU, isn’t there?

dGPU only supports bilinear, so the following parameters should work.

scaling-compute-hw=1
scaling_filter=1
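Put together, a minimal sketch of where those two keys live — assuming a standard nvinfer config file; everything other than the two scaling keys is an illustrative placeholder:

```ini
# Hypothetical nvinfer config excerpt for dGPU; only the two scaling keys
# below reflect the suggestion in this thread, the rest are placeholders.
[property]
gpu-id=0
onnx-file=test.onnx
# 1 = GPU compute path (VIC is not available on dGPU)
scaling-compute-hw=1
# 1 = Bilinear, the interpolation supported on the GPU path
scaling-filter=1
```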


OK, got it. Also, as per this table, is the “Nearest” method the default, or is it only used when scaling-filter=0?

Yes, that’s correct.

But I don’t get this error when using DS 7. Also, using scaling-filter=2 gives me numbers closer to the ONNX counterpart.