Converting Custom ONNX model to TensorRT engine

Hello TensorRT team, I’m a huge advocate and fan of your product! I am reaching out due to trouble converting my custom ONNX model to a TensorRT engine.

When I create a TensorRT engine from my ONNX model, I am unable to run inference successfully; I hit the following error:
[TRT] [E] IExecutionContext::enqueueV3: Error Code 1: Cask (Cask Pooling Runner Execute Failure)

I ran trtexec --verbose to see if I could identify any related issues. Since the engine was created successfully, I thought I was good to go; however, in the verbose output I noticed the following two log lines:

RunnerBuilder of layer implementation CaskJitConv cannot handle striding for node /model_in_feature/model_in_feature.0/Conv

Skipping CaskFlattenConvolution: No valid tactics for /model_in_feature/model_in_feature.0/Conv

I’ve attached the trtexec --verbose logs as reference.
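To narrow the 1.6 MB log down to the relevant lines, I used a small throwaway Python filter (the patterns are just the strings I noticed in the log, nothing official):

```python
def find_tactic_warnings(log_text):
    """Collect trtexec --verbose lines that hint at tactic/runner
    problems for specific nodes (a heuristic string match, nothing more)."""
    patterns = (
        "No valid tactics",
        "cannot handle striding",
        "Cask Pooling Runner Execute Failure",
    )
    return [
        line.strip()
        for line in log_text.splitlines()
        if any(p in line for p in patterns)
    ]


# Sample lines modeled on my actual log output:
sample = """\
[V] RunnerBuilder of layer implementation CaskJitConv cannot handle striding for node /model_in_feature/model_in_feature.0/Conv
[V] Skipping CaskFlattenConvolution: No valid tactics for /model_in_feature/model_in_feature.0/Conv
[V] Engine built in 12.3 sec
"""
for hit in find_tactic_warnings(sample):
    print(hit)
```

Only the two lines quoted above matched; every other mention of that node in the log looked routine.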

In terms of solutions, I've verified that my TensorRT version is compatible with my CUDA version and re-exported the ONNX model at several opsets (11, 14, and 18); none of these approaches has worked so far. I also inspected the layer “/model_in_feature/model_in_feature.0/Conv” in Netron, and its configuration looks perfectly ordinary: strides (1, 1), padding (0, 0, 0, 0), kernel shape (1, 1), dilations (1, 1), and groups 1. I would like a clear path forward before providing the ONNX model.

I have created the TensorRT engine in multiple ways, both with the trtexec command-line tool and through the TensorRT Python API, all resulting in the same error.
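For reference, this is a simplified sketch of the TensorRT 10 Python API path I used (trimmed from my actual script; the file paths and workspace size are placeholders):

```python
def build_engine(onnx_path, engine_path, workspace_bytes=1 << 30):
    """Build and serialize a TensorRT engine from an ONNX file
    (simplified sketch; paths and workspace size are placeholders)."""
    import tensorrt as trt  # deferred so the sketch is importable anywhere

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    # TensorRT 10: networks are explicit-batch by default, so no flags needed.
    network = builder.create_network(0)
    parser = trt.OnnxParser(network, logger)

    if not parser.parse_from_file(onnx_path):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace_bytes)

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)
```

Both this path and trtexec produce an engine without errors, but inference then fails with the enqueueV3 error above.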

Lastly, I have seen in prior discussions on converting custom models to TensorRT that advisors first instruct users to run trtexec --verbose to surface errors, which I have already done.
Environment
• TensorRT Version: 10.3.0
• GPU Type: NVIDIA Tesla V100
• Nvidia Driver Version: 550.12
• CUDA Version: 12.5
• CUDNN Version: 9.2.1.18
• Operating System + Version: Ubuntu 22.04
• Python Version: Python 3.9.20
• TensorFlow Version: N/A
• PyTorch Version: 2.2.0
• Baremetal or Container: Container

trtexec.txt (1.6 MB)

Hi @cdomond,
Can you please share the ONNX model with us?

Thanks