ONNX -> TRT Error Code 1 and 2: Cask (isConsistent) and Internal Error (Assertion enginePtr != nullptr failed.)


I’m trying to deploy an ONNX model using TensorRT, but I’m running into errors.

Used command:
./trtexec --onnx=/home/nvidia/trt_conversion/build-yolo-tensorrt/model/tfmodel_lin_19_batchsize.onnx --saveEngine=test.trt --verbose

The model was converted to ONNX from the ‘SavedModel’ format on the target device, and checking it with onnx.checker.check_model(model) does not return any errors.

I tried the conversion both with and without setting an explicit batch size in the ONNX file.

The attempt was also made with onnx==1.4.1.
Attempts were also made converting the model on the host machine with different opsets (9, 11, 12).


JP Version: 4.6.0
TensorRT Version:
Device Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: 10.2.460-1
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): tensorflow==2.6.2+nv21.12

Relevant Files

Link to ONNX model (without an explicit batch size set — the error is the same in both cases)

Steps To Reproduce

The commands are included in the description above.

We request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the below snippet:


import sys
import onnx

filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

In case you are still facing the issue, we request you to share the trtexec --verbose log for further debugging.

Added putty.log with --verbose: putty.log (305.8 KB)
check_model does not return any errors.
Link to ONNX file (the same as in description)


We could not reproduce the same error on the latest TensorRT version, 8.4 GA.
We recommend you to please use the latest TensorRT version.

&&&& PASSED TensorRT.trtexec [TensorRT v8401] # /opt/tensorrt/bin/trtexec --onnx=tfmodel_lin_19.onnx --verbose

As you’re using a Jetson Xavier, we recommend you to please reach out to the Jetson Xavier forum if you face any issues upgrading the TRT version.

Thank you.


Thanks for the advice. Before upgrading, I tried adding the input shape flag and explicit batch (--shapes=input_1:1x416x416x3 --explicitBatch), which solved the issue.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.