_M_range_check exception encountered in `ICudaEngine::createExecutionContext()`

Description

Loading an ONNX model (attached) via the C++ API triggers the following exception upon calling ICudaEngine::createExecutionContext():

[E] [TRT] 1: Unexpected exception vector<bool>::_M_range_check: __n (which is 0) >= this->size() (which is 0)

This is also reproducible with the released sampleOnnxMNIST code. I am attaching both the ONNX file and the code file needed to reproduce it.

Interestingly, trtexec --onnx=palm.onnx can load the model just fine, so it seems that there’s a way to get this working via the C++ API, but I’m unable to pinpoint what it is.
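
For context, the failing call sequence reduces to the standard build-and-deserialize path sketched below. This is a minimal reconstruction, not the attached file verbatim; the Logger class stands in for the sample's logger, and most error handling is trimmed:

    #include <NvInfer.h>
    #include <NvOnnxParser.h>
    #include <cstdint>
    #include <iostream>
    #include <memory>

    // Minimal logger stand-in; the attached sample uses its own logger class.
    class Logger : public nvinfer1::ILogger
    {
        void log(Severity severity, char const* msg) noexcept override
        {
            if (severity <= Severity::kWARNING)
                std::cerr << msg << std::endl;
        }
    } gLogger;

    int main()
    {
        using namespace nvinfer1;

        auto builder = std::unique_ptr<IBuilder>(createInferBuilder(gLogger));
        uint32_t const flags =
            1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
        auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(flags));
        auto parser = std::unique_ptr<nvonnxparser::IParser>(
            nvonnxparser::createParser(*network, gLogger));

        // Parsing succeeds (trtexec loads the same file without problems).
        if (!parser->parseFromFile("palm.onnx", static_cast<int>(ILogger::Severity::kWARNING)))
            return 1;

        auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());
        auto plan = std::unique_ptr<IHostMemory>(
            builder->buildSerializedNetwork(*network, *config));
        auto runtime = std::unique_ptr<IRuntime>(createInferRuntime(gLogger));
        auto engine = std::unique_ptr<ICudaEngine>(
            runtime->deserializeCudaEngine(plan->data(), plan->size()));

        // The [E] ... _M_range_check message is logged from inside this call:
        auto context = std::unique_ptr<IExecutionContext>(engine->createExecutionContext());
        std::cout << (context ? "context created" : "context creation failed") << std::endl;
        return context ? 0 : 1;
    }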

Environment

Using the Docker build container per the instructions in the TensorRT repository.

TensorRT Version: 8.6.1
GPU Type: NVIDIA GeForce RTX 3090
Nvidia Driver Version: 535.54.03
CUDA Version: 12.0 (but also reproducible on 11.6)
CUDNN Version: 8.8
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): N/A
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): N/A
Baremetal or Container (if container which image + tag): Container (TensorRT build container per the repository instructions; see above)

Relevant Files

ONNX Model:
palm.onnx (3.9 MB)

Source code file:
sampleOnnxMNIST.cpp (12.4 KB)

Steps To Reproduce

  • Set up and launch the Docker container build environment for the TensorRT samples per the instructions.
  • Replace the sample's sampleOnnxMNIST.cpp with the attached code file.
  • Compile (instructions) and run the executable at /workspace/TensorRT/build/out/sample_onnx_mnist.
  • The exception is raised (see the note after the log):
    Creating execution context
    [09/01/2023-06:58:59] [E] [TRT] 1: Unexpected exception vector<bool>::_M_range_check: __n (which is 0) >= this->size() (which is 0)
    Created execution context 
    &&&& FAILED TensorRT.sample_onnx_mnist [TensorRT v8601] # ./sample_onnx_mnist
    
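Note that "Created execution context" is printed even though the call failed, so the attached sample evidently does not validate the pointer returned by createExecutionContext(). A guard along the following lines (makeContext is a hypothetical helper, not part of the sample) makes the failure explicit:

    #include <NvInfer.h>
    #include <iostream>
    #include <memory>

    // Hypothetical guard: checking the returned pointer prevents the
    // misleading "Created execution context" message from being printed
    // right after the [E] line in the log above.
    bool makeContext(nvinfer1::ICudaEngine& engine,
                     std::unique_ptr<nvinfer1::IExecutionContext>& context)
    {
        context.reset(engine.createExecutionContext());
        if (!context)
        {
            std::cerr << "createExecutionContext() returned null" << std::endl;
            return false;
        }
        return true;
    }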

Hi,
Please refer to the link below for the sample guide.

Refer to the installation steps in the link in case you are missing anything.

However, the suggested approach is to use the TRT NGC containers to avoid any system-dependency issues.

To run the Python samples, make sure the TRT Python packages are installed when using the NGC container:
/opt/tensorrt/python/python_setup.sh

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

Hi, yes, I have already shared the custom model and script in the original post.

Could you please file a bug at Issues · NVIDIA/TensorRT · GitHub with the repro model, so that we can help further.

Thank you.