Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) dGPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) N/A
• TensorRT Version 8.0.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470.42.01
• Issue Type (questions, new requirements, bugs) bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Only when I run this classifier engine with scaling_filter=2, I get the error below. The ONNX model is attached: test.onnx.zip (84.3 MB).
```
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:00:06.901790258 66966 0x56431ee9e4f0 WARN nvinfer gstnvinfer.cpp:2325:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:00:06.901778908 66966 0x56431ee9e320 WARN nvinfer gstnvinfer.cpp:2325:gst_nvinfer_output_loop:<secondary-nvinference-engine2> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:06.901951829 66966 0x56431ee9e050 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<secondary-nvinference-engine3> error: Failed to queue input batch for inferencing
0:00:06.902001356 66966 0x56431ee9e320 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-nvinference-engine2> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1789> [UID = 4]: Tried to release an unknown outputBatchID
0:00:06.902064587 66966 0x56431ee9e4f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: Tried to release an outputBatchID which is already with the context
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:00:06.902386125 66966 0x56431ee9e4f0 WARN nvinfer gstnvinfer.cpp:2325:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:00:06.902459484 66966 0x56431ee9e4f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: Tried to release an outputBatchID which is already with the context
```
Note that this happens with scaling_filter=2, 3, and 4, but works fine with scaling_filter=1.
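For completeness, a minimal sketch of the relevant part of my nvinfer config is below. The spelling in the config file is scaling-filter; the model path and other properties are placeholders for my actual setup:

```
[property]
gpu-id=0
# Placeholder path; the attached test.onnx is the model actually used.
onnx-file=test.onnx
# Classifier network.
network-type=1
# Interpolation filter used when scaling frames to the network input.
# 1 (bilinear) works; 2, 3 and 4 all trigger the cudaErrorIllegalAddress above.
scaling-filter=2
```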
Also, I have tried both running inference on the ONNX model directly and loading the built engine with trtexec --loadEngine, and both work fine.
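For reference, the standalone TensorRT check was along these lines (file names are placeholders):

```
# Build a TensorRT engine from the same ONNX model, then load and run it standalone:
trtexec --onnx=test.onnx --saveEngine=test.engine
trtexec --loadEngine=test.engine
```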