TensorRT conversion for ONNX model with two output arrays

Description

Following the example shown at TensorRT/4. Using PyTorch through ONNX.ipynb at master · NVIDIA/TensorRT · GitHub

I wonder how I can create the bindings for a model that has more than one output array.

I would really appreciate clear suggestions.
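For reference, here is a minimal sketch of the kind of buffer allocation I have in mind, adapted from the allocation pattern in TensorRT's Python samples rather than the notebook itself (the PyCUDA-based setup is my own assumption, and I am not sure this is the right way to handle two outputs): one host/device buffer pair is created per engine binding, so a two-output network simply ends up with two entries in the outputs list.

import numpy as np
import pycuda.autoinit  # creates and manages a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

def allocate_buffers(engine):
    # One host/device buffer pair per binding; a two-output network
    # therefore produces two entries in the outputs list.
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding))
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append((host_mem, device_mem))
        else:
            outputs.append((host_mem, device_mem))
    return inputs, outputs, bindings, stream

def infer(context, inputs, outputs, bindings, stream, input_array):
    # Copy the input to the device, execute, then copy back every output buffer.
    np.copyto(inputs[0][0], input_array.ravel())
    cuda.memcpy_htod_async(inputs[0][1], inputs[0][0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for host_mem, device_mem in outputs:
        cuda.memcpy_dtoh_async(host_mem, device_mem, stream)
    stream.synchronize()
    return [host_mem for host_mem, _ in outputs]  # one array per output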

Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validating your model with the snippet below:

check_model.py

import sys
import onnx

# Pass the path to your ONNX model as the first argument.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, we request you to share the trtexec "--verbose" log for further debugging (see the example invocation below).
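A minimal example invocation, assuming your model file is named model.onnx (substitute your own path):

trtexec --onnx=model.onnx --verbose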
Thanks!

Hi,

Thanks for your feedback.

I get an error when I try converting using trtexec.

The trtexec output is here:

&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=resnet18_2_outputs.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw --fp16
[06/24/2021-11:46:09] [I] === Model Options ===
[06/24/2021-11:46:09] [I] Format: ONNX
[06/24/2021-11:46:09] [I] Model: resnet18_2_outputs.onnx
[06/24/2021-11:46:09] [I] Output:
[06/24/2021-11:46:09] [I] === Build Options ===
[06/24/2021-11:46:09] [I] Max batch: explicit
[06/24/2021-11:46:09] [I] Workspace: 16 MB
[06/24/2021-11:46:09] [I] minTiming: 1
[06/24/2021-11:46:09] [I] avgTiming: 8
[06/24/2021-11:46:09] [I] Precision: FP32+FP16
[06/24/2021-11:46:09] [I] Calibration:
[06/24/2021-11:46:09] [I] Safe mode: Disabled
[06/24/2021-11:46:09] [I] Save engine: resnet_engine_pytorch.trt
[06/24/2021-11:46:09] [I] Load engine:
[06/24/2021-11:46:09] [I] Builder Cache: Enabled
[06/24/2021-11:46:09] [I] NVTX verbosity: 0
[06/24/2021-11:46:09] [I] Input: fp16:chw
[06/24/2021-11:46:09] [I] Output: fp16:chw
[06/24/2021-11:46:09] [I] Input build shapes: model
[06/24/2021-11:46:09] [I] Input calibration shapes: model
[06/24/2021-11:46:09] [I] === System Options ===
[06/24/2021-11:46:09] [I] Device: 0
[06/24/2021-11:46:09] [I] DLACore:
[06/24/2021-11:46:09] [I] Plugins:
[06/24/2021-11:46:09] [I] === Inference Options ===
[06/24/2021-11:46:09] [I] Batch: Explicit
[06/24/2021-11:46:09] [I] Input inference shapes: model
[06/24/2021-11:46:09] [I] Iterations: 10
[06/24/2021-11:46:09] [I] Duration: 3s (+ 200ms warm up)
[06/24/2021-11:46:09] [I] Sleep time: 0ms
[06/24/2021-11:46:09] [I] Streams: 1
[06/24/2021-11:46:09] [I] ExposeDMA: Disabled
[06/24/2021-11:46:09] [I] Spin-wait: Disabled
[06/24/2021-11:46:09] [I] Multithreading: Disabled
[06/24/2021-11:46:09] [I] CUDA Graph: Disabled
[06/24/2021-11:46:09] [I] Skip inference: Disabled
[06/24/2021-11:46:09] [I] Inputs:
[06/24/2021-11:46:09] [I] === Reporting Options ===
[06/24/2021-11:46:09] [I] Verbose: Disabled
[06/24/2021-11:46:09] [I] Averages: 10 inferences
[06/24/2021-11:46:09] [I] Percentile: 99
[06/24/2021-11:46:09] [I] Dump output: Disabled
[06/24/2021-11:46:09] [I] Profile: Disabled
[06/24/2021-11:46:09] [I] Export timing to JSON file:
[06/24/2021-11:46:09] [I] Export output to JSON file:
[06/24/2021-11:46:09] [I] Export profile to JSON file:
[06/24/2021-11:46:09] [I]
----------------------------------------------------------------
Input filename: resnet18_2_outputs.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.6
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
terminate called after throwing an instance of 'std::invalid_argument'
  what():  The number of outputIOFormats must match network's outputs.
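If I read this correctly, trtexec seems to expect one IO-format entry per output tensor, so with two outputs the command might need one format per output, for example (just a guess on my part, not verified):

/usr/src/tensorrt/bin/trtexec --onnx=resnet18_2_outputs.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw,fp16:chw --fp16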


Please find my ONNX file here: resnet18_2_outputs.onnx - Google Drive

Thanks

Hi @source821,

We recommend that you try the latest TensorRT version. We are able to generate the engine successfully when we run on the latest TensorRT version, 8.0 EA.

&&&& PASSED TensorRT.trtexec [TensorRT v8000] # trtexec --onnx=/my_data/181682_arrays/model.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw --fp16 --verbose
[06/24/2021-16:59:58] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 2339, GPU 14450 (MiB)
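In case it helps, you can confirm which TensorRT version your environment is using from Python (assuming the Python bindings are installed):

import tensorrt as trt
print(trt.__version__)  # prints the installed TensorRT version string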

Thank you.

Thanks, I will give it a try.