Description
I have an ONNX model with two outputs: masks with the shape [-1, 3, 256, 256] and iou_predictions with the shape [-1, 3]. But I get an error during inference:
[06/26/2025-19:46:17] [TRT] [E] IExecutionContext::enqueueV3: Error Code 1: Cuda Runtime (an illegal memory access was encountered)
I noticed that context.get_tensor_shape('masks') returns [1, 3, 256, 256], but it should be [13, 3, 256, 256].
I also noticed another error message when I set the input shape:
[06/26/2025-19:46:17] [TRT] [E] IExecutionContext::setInputShape: Error Code 3: API Usage Error (Parameter check failed, condition: engineDims.d[i] == dims.d[i]. Static dimension mismatch while setting input shape for mask_input. Set dimensions are [13,1,256,256]. Expected dimensions are [1,1,256,256].)
But I had already created an optimization profile for this input when I built the engine.
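For reference, the check that setInputShape performs can be sketched in plain Python (set_input_shape_ok is a hypothetical helper name; it mirrors the condition engineDims.d[i] == dims.d[i] from the log, where an engine dim of -1 marks a dynamic axis):

```python
def set_input_shape_ok(engine_dims, requested_dims):
    """Mimic TensorRT's IExecutionContext::setInputShape validation:
    a requested dim may differ from the engine dim only where the
    engine dim is dynamic (-1)."""
    if len(engine_dims) != len(requested_dims):
        return False
    for e, r in zip(engine_dims, requested_dims):
        if e != -1 and e != r:  # static dimension mismatch
            return False
    return True

# mask_input appears to be baked into the engine with static dims
# [1, 1, 256, 256], so requesting [13, 1, 256, 256] is rejected,
# exactly as in the log above:
print(set_input_shape_ok([1, 1, 256, 256], [13, 1, 256, 256]))   # False
# Had the batch axis stayed dynamic in the engine ([-1, 1, 256, 256]),
# the same request would pass:
print(set_input_shape_ok([-1, 1, 256, 256], [13, 1, 256, 256]))  # True
```

This suggests the optimization profile never applied to mask_input because the axis was already static in the network, e.g. if the ONNX export did not mark that input's batch axis as dynamic.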
I have two questions:
1. Why did the second error occur?
2. Did the second error cause the first one?
Environment
Linux for Orin.
TensorRT Version: 10.3.0
GPU Type: Orin
Nvidia Driver Version: 540.4.0
CUDA Version: 12.6
Python Version (if applicable): 3.8