Convert ONNX to TRT error on Jetson Xavier NX




TensorRT Version: 7.1.3
GPU Type: Tegra PCIe x4/x8 Endpoint/Root Complex
Nvidia Driver Version: nvidia-l4t-3d-core 32.4.4
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04.5 LTS
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.0
Baremetal or Container (if container which image + tag):

Relevant Files

ONNX model information:

ONNX IR version: 0.0.3
Opset version: 9
Producer name: pytorch
Producer version: 0.4
Model version: 0
Doc string:

Error information:

ERROR: …/builder/cudnnBuilderGraphShapeAnalyzer.cpp (2467) - Assertion Error in updateExtent: 0 (layer validation and shape analyzer disagree about dimensions)

Assertion `engine` failed.

Steps To Reproduce

This is my deployment code:

        nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(iLogger_);

        // Explicit-batch network, as required by the ONNX parser
        // nvinfer1::INetworkDefinition* network = builder->createNetwork();
        const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
        nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch);

        LOG(INFO) << "Begin parsing model from " << model_file_;
        auto parser = nvonnxparser::createParser(*network, iLogger_);
        int verbosity = static_cast<int>(nvinfer1::ILogger::Severity::kWARNING);
        if (!parser->parseFromFile(model_file_.c_str(), verbosity))
            LOG(ERROR) << "Failed to parse model file from " << model_file_ << "!";
        LOG(INFO) << "End parsing model from " << model_file_ << ".";

        builder->setMaxWorkspaceSize(1 << 15);  // 32 KB; likely too small for most models

        nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

When I run it, it crashes. I think maybe my PyTorch version is too low.

Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

import sys
import onnx
filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises if the model is structurally invalid
  2. Try running your model with the trtexec command.
In case you are still facing the issue, please share the trtexec `--verbose` log for further debugging.
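A minimal trtexec invocation for this case might look like the following; the model path is a placeholder, and `--explicitBatch` matches the explicit-batch network used in the code above:

```shell
trtexec --onnx=model.onnx --explicitBatch --verbose
```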

Thank you. I checked my model definition; there was a problem with my max-pooling op.
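As a side note, a max-pooling op is a common source of the "layer validation and shape analyzer disagree about dimensions" class of error, because the pooled output extent depends on whether the exporter used the floor formula or ceil_mode. The sketch below (plain Python, with illustrative values not taken from this model) shows how the two conventions produce different output sizes for the same input:

```python
import math

def pool_out(size, kernel, stride, pad=0, ceil_mode=False):
    """Output extent of a pooling layer along one dimension."""
    num = size + 2 * pad - kernel
    return (math.ceil(num / stride) if ceil_mode else num // stride) + 1

# Illustrative values: a 112-wide input, kernel 3, stride 2, no padding.
floor_out = pool_out(112, kernel=3, stride=2)                  # 55
ceil_out = pool_out(112, kernel=3, stride=2, ceil_mode=True)   # 56

print(floor_out, ceil_out)  # prints: 55 56
```

When the producer and the consumer of the ONNX graph disagree on this convention, every downstream layer inherits a shape that is off by one in each pooled dimension.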