TensorRT error "Network must have at least one output" when using an ONNX model on Jetson Nano

I converted an ONNX model using torch.onnx.export with torch 1.4.0 and torchvision 0.5.0. On a Jetson Nano with JetPack 4.3 and TensorRT 6.0.1.10, I get the above-mentioned error.
https://drive.google.com/open?id=15pRSfjf74ogsnW8iu1xtKinr2pW908Gj Please help me solve this issue.

Hi,

This specific issue arises because the ONNX parser isn't currently compatible with ONNX models exported from PyTorch >= 1.3. If you downgrade to PyTorch 1.2, the issue should go away.

You can use TRT 7, which supports PyTorch 1.3.
Please refer to the link below for more details:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-release-notes/tensorrt-7.html#tensorrt-7

Thanks

Is TensorRT 7.0 available for Jetson Nano?

No, not yet.
The latest JetPack version (4.3) supports TRT 6.0.
https://developer.nvidia.com/embedded/jetpack

Thanks

Hi,

I converted the ONNX model using torch 1.2 with a dynamic batch size. When I ran it on a Jetson Nano with JetPack 4.3 and TensorRT 6.0.1:

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open(model_path, 'rb') as model:
        # Optimization profile for the dynamic batch dimension: (min, opt, max) shapes.
        config = builder.create_builder_config()
        profile = builder.create_optimization_profile()
        profile.set_shape("input", (1, 3, 128, 64), (10, 3, 128, 64), (15, 3, 128, 64))
        config.add_optimization_profile(profile)
        # Parse the ONNX file and build the engine with that config.
        sd = parser.parse(model.read())
        self.engine = builder.build_engine(network, config)
        print("sd = ", sd)

I get the following error:

[TensorRT] VERBOSE: 196:Relu -> (512, 8, 4)
[TensorRT] VERBOSE: 198:AveragePool -> (512, 1, 1)
[TensorRT] VERBOSE: 206:Reshape -> (512)
[TensorRT] VERBOSE: 207:ReduceL2 -> (1)
[TensorRT] VERBOSE: output:Div -> (512)
[TensorRT] ERROR: Minimum dimensions in profile 0 for static input "input" are [1,3,128,64], expected [3,128,64]
sd =  True

ONNX model: https://drive.google.com/open?id=1y2SR5Mo62hRrIfczJQpXThyCfyK72Z-m

The same code works with TRT 7, but TRT 7 is not supported on Jetson Nano. :(

How can I solve this issue?

Hi,

Can you try exporting your model with static input shapes and then converting it?

You can also use the “trtexec” command-line tool to understand performance and possibly locate bottlenecks.

Please find the links below for your reference:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#trtexec
https://github.com/NVIDIA/TensorRT/blob/release/6.0/samples/opensource/trtexec/README.md
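For reference, a minimal sketch of a static-shape export could look like the following (a placeholder torchvision backbone stands in for your actual model; the (1, 3, 128, 64) input size and tensor names are taken from this thread, and the output file name is only an example):

import torch
import torchvision

# Placeholder network; substitute the actual model from this thread.
model = torchvision.models.resnet18(pretrained=False).eval()

# Fixed batch size of 1: every dimension, including the batch, is frozen in the exported graph.
dummy_input = torch.randn(1, 3, 128, 64)

torch.onnx.export(model, dummy_input, "static_dtrk.onnx",
                  input_names=["input"], output_names=["output"])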

Thanks

Hi,

  • First, I tried converting the ONNX model with a static input (batch size) of (1, 3, 128, 64) to a static TRT engine. I can successfully convert the model to a TRT engine.
  • Second, I tried converting the ONNX model with a dynamic input (batch size) of (?, 3, 128, 64) to a dynamic TRT engine. I am not able to convert it to a TRT engine; this is the issue mentioned above.
  • Third, I tried converting the ONNX model with a static input (batch size) of (1, 3, 128, 64) to a dynamic TRT engine. I get the following error:

    [TensorRT] INTERNAL ERROR: Assertion failed: dims.nbDims == nbDims
    Aborting...
    [TensorRT] ERROR: ../builder/shapeCompiler.cpp (738) - Assertion Error in setInputDimensions:0 (dims.nbDims == nbDims)

Help me convert this model to a dynamic-shape TRT engine.

Any update??

Hi,

Your new model also seems to have been produced with PyTorch 1.3. TRT 6 doesn't support PyTorch 1.3 models. Also, TRT 7 has better dynamic-shape support.
dtrk11.onnx - “producer - pytorch 1.3”

Also, the current ONNX model uses a hard-coded batch_size value taken from dummy_input.
Please refer to the link below:
https://github.com/onnx/onnx/issues/654#issuecomment-410538671
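As the linked issue describes, the batch dimension has to be declared dynamic at export time via dynamic_axes; a minimal sketch (again with a placeholder backbone, and the output file name is only an example):

import torch
import torchvision

# Placeholder network; substitute the actual model from this thread.
model = torchvision.models.resnet18(pretrained=False).eval()

dummy_input = torch.randn(1, 3, 128, 64)

# dynamic_axes marks dimension 0 of "input" and "output" as variable, so the exported
# graph does not hard-code the batch size taken from dummy_input.
torch.onnx.export(model, dummy_input, "dynamic_dtrk.onnx",
                  input_names=["input"], output_names=["output"],
                  dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}})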

Thanks

But TRT 7 is not available for Jetson Nano.

dtrk11.onnx was converted using PyTorch 1.2 (which TensorRT 6 supports) with a dynamic batch size. I have attached a model snapshot.

The input shape is [batch_size x 3 x 128 x 64].

Hi,

But it seems the actual model was produced using PyTorch 1.3. You can check that by clicking the input element and looking at the “producer” field in the details.

Can you try creating the PyTorch model with PyTorch v1.2 and then converting it to an ONNX model?

Also, try upgrading the ONNX opset version to opset 10 or higher and rerun the script.
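You can also check the recorded producer and opset programmatically with the onnx Python package, something like:

import onnx

# Inspect which framework/version actually produced the file and which opset it targets.
m = onnx.load("dtrk11.onnx")
print(m.producer_name, m.producer_version)   # e.g. "pytorch 1.3"
print(m.opset_import[0].version)             # opset version of the default ONNX domain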

Thanks

Hi,
Thanks for your inputs, but it didn't work.

Here is the new model link, exported with PyTorch 1.2 and ONNX opset version 10:
https://drive.google.com/open?id=1bCUgNuQ9X0tBOe20cT8PxXt4SOQ4J04o
I get the same error:

[TensorRT] VERBOSE: 196:Relu -> (512, 8, 4)
[TensorRT] VERBOSE: 198:AveragePool -> (512, 1, 1)
[TensorRT] VERBOSE: 206:Reshape -> (512)
[TensorRT] VERBOSE: 207:ReduceL2 -> (1)
[TensorRT] VERBOSE: output:Div -> (512)
[TensorRT] ERROR: Minimum dimensions in profile 0 for static input "input" are [1,3,128,64], expected [3,128,64]
sd =  True
    

Hi,

In this case, you can try using implicit batch mode:

trtexec --onnx=dynamic_dtrk.onnx --verbose --batch=x

This seems to work in TRT 6. Setting the max batch size lets you run any batch size from 1 up to that maximum,
but only the max batch size is guaranteed to give the best performance.
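If you prefer the Python API, a rough equivalent of that implicit-batch approach might look like the sketch below (untested; the ONNX file name and the max batch size of 15 are assumptions based on this thread):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

# Implicit batch mode: no optimization profile; the batch dimension is governed by max_batch_size.
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("dynamic_dtrk.onnx", "rb") as f:
        parser.parse(f.read())
    builder.max_batch_size = 15          # engine can then be run with batch sizes 1..15
    engine = builder.build_cuda_engine(network)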

Thanks

Hi @SunilJB, I got a similar error when I tried converting an ONNX model (created with PyTorch 1.1) into a TensorRT engine with TensorRT 7.1.0.16 on a TX2:

[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.

After seeing this post, I created a new environment with PyTorch 1.4 and ran torch.onnx.export to create a new ONNX model. When I tried the conversion with the new ONNX model, I got the same old error.

How do I solve this error?

Thanks in advance.
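For anyone debugging this, here is a minimal sketch for surfacing the parser's own error messages, which are often what is hiding behind the "must have at least one output" failure when parse() returns False (the file name is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:
        # If parsing fails, the network stays empty and engine building later
        # complains that the network has no outputs.
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))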