TensorRT Quick Start Guide Example is not running (JetPack 4.2.2)

I have an issue that I wasn't able to solve with the posts I have found so far.

I am using a Jetson AGX Xavier and want to run real-time inference with TensorRT. Because of project-internal dependencies I am forced to use JetPack 4.2.2.

With that I am running these versions:

  • onnx v1.10.2
  • TensorRT v5.1.6.1
  • tensorflow-gpu v1.14.0
  • CUDA v10.0
  • cuDNN v7

To be certain that my model is not at fault, I am using the ImageNet ResNet50 model.

1 - Quick Start Guide

I tried to follow the example notebook from the NVIDIA Quick Start Guide.

I am getting the same error that I also got with my own model when I run:
trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt

&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt
[I] onnx: resnet50_onnx_model.onnx
[I] saveEngine: resnet_engine.trt
#----------------------------------------------------------------
Input filename: resnet50_onnx_model.onnx
ONNX IR version: 0.0.7
Opset version: 13
Producer name: keras2onnx
Producer version: 1.8.1
Domain: onnxmltools
Model version: 0
Doc string:
#----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.7) than this parser was built against (0.0.3).
While parsing node number 1 [Conv]:
ERROR: ModelImporter.cpp:288 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[E] failed to parse onnx file
[E] Engine could not be created
[E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt

I have checked the model with the ONNX checker, which returns without raising an error:

onnx.checker.check_model(onnx_model)
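Incidentally, the two numbers trtexec complains about can be read straight off the model, which makes the mismatch with the parser (built against IR 0.0.3) easy to confirm. A small sketch:

import onnx

onnx_model = onnx.load("resnet50_onnx_model.onnx")
# The fields trtexec reports as "ONNX IR version" and "Opset version".
print("ir_version:", onnx_model.ir_version)          # 7 (printed as 0.0.7 above)
print("opset:", onnx_model.opset_import[0].version)  # 13 in the log above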

I should add that --explicitBatch is not available with TensorRT 5.x, and that I am using keras2onnx to convert the model (two things that differ from the notebook).
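If the version mismatch is the culprit (the parser warning says it was built against IR 0.0.3, while my export is IR 7 / opset 13), re-exporting with a pinned, older opset might be worth a try. A minimal sketch, assuming tf.keras and keras2onnx v1.8.1; whether opset 9 plus a patched ir_version actually satisfies the TensorRT 5.1 parser is an open question:

import onnx
import keras2onnx
from tensorflow.keras.applications.resnet50 import ResNet50

model = ResNet50(weights="imagenet")
# Pin an older opset; keras2onnx otherwise picks a recent default (13 above).
onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=9)
# onnx 1.10 still stamps a new IR version; forcing an older one is a common
# workaround for old parsers, but is not guaranteed to yield a valid IR-3 model.
onnx_model.ir_version = 3
onnx.save(onnx_model, "resnet50_onnx_model.onnx")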

2 - TensorRT Documentation

Furthermore, I found another approach in the TensorRT documentation:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
ONNX_MODEL = "resnet50_onnx_model.onnx"

def build_engine():
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        # Configure the builder here.
        builder.max_workspace_size = 2**30
        # In this example we use the ONNX parser, but this could instead be the
        # Caffe/UFF parser, or the Network API to build the network manually.
        with open(ONNX_MODEL, 'rb') as model:
            # parse() returns False on failure; the original snippet ignored
            # this, so a failed parse only surfaced later as the errors below.
            if not parser.parse(model.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
        #-------------------------Added-----------------------------
        # If the parser did not mark any outputs, mark the last layer's output
        # manually. (The original condition, `if not last_layer.get_output(0)`,
        # would try to mark a tensor that does not exist; the num_layers guard
        # avoids calling get_layer(-1) when parsing left the network empty.)
        if network.num_outputs == 0 and network.num_layers > 0:
            last_layer = network.get_layer(network.num_layers - 1)
            network.mark_output(last_layer.get_output(0))
        #-----------------------------------------------------------

        # Build and return the engine. Note that the builder,
        # network and parser are destroyed when this function returns.
        return builder.build_cuda_engine(network)
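For completeness, a minimal way to call this and persist the result would be (a sketch; serialize() is the standard ICudaEngine method):

# Build the engine once and serialize it to disk, so it can later be
# deserialized instead of rebuilt. build_cuda_engine() returns None on failure.
engine = build_engine()
if engine is None:
    raise RuntimeError("Engine build failed; see the parser/builder log above.")
with open("resnet_engine.trt", "wb") as f:
    f.write(engine.serialize())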

With the ResNet50 model I get the error:

[TensorRT] ERROR: Network must have at least one output

even though I marked the output as described there.
With my own model I then get the error below, presumably because parsing failed, the network has zero layers, and get_layer() is called with index -1:

python3: …/builder/Network.cpp:863: virtual nvinfer1::ILayer* nvinfer1::Network::getLayer(int) const: Assertion `layerIndex >= 0’ failed.
Aborted

Other suggestions are to update TensorRT, but as far as I understand that would mean updating JetPack as well.

Now… can someone help me with this issue? I am quite new to this and would appreciate any help or pointers to obvious mistakes. I suspect this is because of some versioning issue, but JetPack 4.2.2 isn't that old, so I imagine it must work somehow.

Thanks a lot!

Hi,

Is your model working with ONNX Runtime?
Based on the error, it seems that it doesn't have an output layer.

You can visualize it with the website below:
https://netron.app/

Also, it is still recommended to upgrade to the latest TensorRT version.
What kind of dependency is your project facing?

Thanks

Hi!

Thanks for your answer.

  1. I haven't tried using ONNX Runtime so far. I will try that next and update you on the outcome.

  2. I have visualized it. Indeed, there seems to be something wrong with my model; I will look into it. The ResNet50 model, however, looks good to me: inputs and outputs are recognized.

As I am following the instructions from the Quick Start Guide, I would assume that at least the model and its conversion to ONNX were done correctly.

  3. A colleague is building applications for the 'robot' the Jetson is placed on. This project has been running for a while now, and the risk of updating at this phase would be too great. I am not aware of the dependencies in detail, though.

Just an update from my side. My model works fine with ONNX Runtime.
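The check was essentially a smoke test along these lines (a sketch; the NHWC input shape is my assumption for the keras export):

import numpy as np
import onnxruntime as ort

# Run one dummy batch through the exported model (CPU provider by default).
sess = ort.InferenceSession("resnet50_onnx_model.onnx")
inp = sess.get_inputs()[0]
# 1x224x224x3 NHWC is assumed for the keras ResNet50; check inp.shape if unsure.
x = np.random.rand(1, 224, 224, 3).astype(np.float32)
out = sess.run(None, {inp.name: x})
print(out[0].shape)  # (1, 1000) class scores for ResNet50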

Are there any disadvantages to using ONNX Runtime over TensorRT?

I wasn't able to install onnxruntime-gpu as explained here so far, though (the Docker image apparently only applies to JetPack >= 4.4).
I am now passing "CUDAExecutionProvider" as the provider when calling ort.InferenceSession(), but I am not sure about the differences.

Correction: I must have overlooked the error that "CUDAExecutionProvider" is not available.
Of course I would like to utilize my GPU.
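For reference, whether the CUDA provider is actually compiled into the installed wheel can be checked directly:

import onnxruntime as ort

# A CPU-only wheel lists only CPUExecutionProvider here, which is how the
# missing "CUDAExecutionProvider" shows up.
print(ort.get_available_providers())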

I managed to install onnxruntime-gpu v1.4.0; however, from what I have found so far, I need v1.1.2 for compatibility with CUDA v10.0.
I am having trouble finding a way to install that version, though, if it is even available.

Hi,

Since JetPack 4.2.2 was released quite a long time ago, it would be good if you could upgrade the environment to the latest version.

We don't provide an onnxruntime package for the CUDA 10.0 environment.
Maybe you can try to build it from source.

Thanks.
