Unused input error building TensorRT engine

Description

Hi,
I am trying to accelerate a TensorFlow model using TensorRT. I am able to convert the frozen graph to an ONNX file, but I am not able to generate the TensorRT engine. The error "unused input" doesn't make sense, since the input is connected to the other nodes, as can be seen in TensorBoard or Netron.
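For context, the frozen graph was converted with tf2onnx, presumably along the lines of the command below (the frozen-graph file name and output tensor name are placeholders; the input name "images:0" and opset 11 match the parser log further down):

python -m tf2onnx.convert --input frozen_graph.pb --inputs images:0 --outputs <output_name>:0 --opset 11 --output model.onnx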

I get the following error/warnings:
2020-05-29 23:07:41.265729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2988 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
Cannot infer shape for conv1_1/conv1_1/bn/FusedBatchNormV3: conv1_1/conv1_1/bn/FusedBatchNormV3:5
Cannot infer shape for conv1_2/conv1_2/bn/FusedBatchNormV3: conv1_2/conv1_2/bn/FusedBatchNormV3:5
Cannot infer shape for conv2_1/1/conv2_1/1/bn/FusedBatchNormV3: conv2_1/1/conv2_1/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv2_3/bn/FusedBatchNormV3: conv2_3/bn/FusedBatchNormV3:5
Cannot infer shape for conv2_3/1/conv2_3/1/bn/FusedBatchNormV3: conv2_3/1/conv2_3/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_1/bn/FusedBatchNormV3: conv3_1/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_1/1/conv3_1/1/bn/FusedBatchNormV3: conv3_1/1/conv3_1/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_3/bn/FusedBatchNormV3: conv3_3/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_3/1/conv3_3/1/bn/FusedBatchNormV3: conv3_3/1/conv3_3/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_1/bn/FusedBatchNormV3: conv4_1/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_1/1/conv4_1/1/bn/FusedBatchNormV3: conv4_1/1/conv4_1/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_3/bn/FusedBatchNormV3: conv4_3/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_3/1/conv4_3/1/bn/FusedBatchNormV3: conv4_3/1/conv4_3/1/bn/FusedBatchNormV3:5
Cannot infer shape for fc1/fc1/bn/FusedBatchNormV3: fc1/fc1/bn/FusedBatchNormV3:5
Loading ONNX file from path model.onnx…
Beginning ONNX file parsing
[TensorRT] VERBOSE: map/Shape:0:Shape -> (4)
Completed parsing of ONNX file
[TensorRT] ERROR: Unused Input: images:0
Traceback (most recent call last):
  File "build_engine.py", line 83, in <module>
    main()
  File "build_engine.py", line 73, in main
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
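For reference, the failing script follows the usual TensorRT 6 Python workflow, roughly like the sketch below (file names and workspace size are my assumptions). The AttributeError happens because build_cuda_engine() returns None when parsing or building fails, so checking the parser's error list surfaces the real problem instead:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def main():
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(EXPLICIT_BATCH) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28  # 256 MB, assumed
        with open("model.onnx", "rb") as f:
            if not parser.parse(f.read()):
                # Surface parser errors instead of failing later with engine=None.
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return
        engine = builder.build_cuda_engine(network)
        if engine is None:
            # The build itself can still fail after a clean parse,
            # e.g. with the "Unused Input" error above.
            print("Engine build failed")
            return
        buf = engine.serialize()
        with open("model.engine", "wb") as f:
            f.write(buf)

if __name__ == "__main__":
    main()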

Environment

TensorRT Version: 6.0.1.10
GPU Type: Pascal
Nvidia Driver Version:
CUDA Version: 10.0
CUDNN Version: 7.6.3.28
Operating System + Version: aarch64, Jetson TX2
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.15
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

https://drive.google.com/file/d/1Xa2deqn8v7QMisy8tBQyehXkO70rit_P/view?usp=sharing

Steps To Reproduce

Run the Python file with "python build_engine.py". No external dependencies.

Thanks for sharing the script and model file to reproduce the issue. We will look into it and update you accordingly.

Thanks

I have some updates. I used trtexec to convert the ONNX file to a .engine file and got the following log.

&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=model1.onnx --explicitBatch --verbose
[05/01/2020-12:27:44] [I] === Model Options ===
[05/01/2020-12:27:44] [I] Format: ONNX
[05/01/2020-12:27:44] [I] Model: model1.onnx
[05/01/2020-12:27:44] [I] Output:
[05/01/2020-12:27:44] [I] === Build Options ===
[05/01/2020-12:27:44] [I] Max batch: explicit
[05/01/2020-12:27:44] [I] Workspace: 16 MB
[05/01/2020-12:27:44] [I] minTiming: 1
[05/01/2020-12:27:44] [I] avgTiming: 8
[05/01/2020-12:27:44] [I] Precision: FP32
[05/01/2020-12:27:44] [I] Calibration:
[05/01/2020-12:27:44] [I] Safe mode: Disabled
[05/01/2020-12:27:44] [I] Save engine:
[05/01/2020-12:27:44] [I] Load engine:
[05/01/2020-12:27:44] [I] Inputs format: fp32:CHW
[05/01/2020-12:27:44] [I] Outputs format: fp32:CHW
[05/01/2020-12:27:44] [I] Input build shapes: model
[05/01/2020-12:27:44] [I] === System Options ===
[05/01/2020-12:27:44] [I] Device: 0
[05/01/2020-12:27:44] [I] DLACore:
[05/01/2020-12:27:44] [I] Plugins:
[05/01/2020-12:27:44] [I] === Inference Options ===
[05/01/2020-12:27:44] [I] Batch: Explicit
[05/01/2020-12:27:44] [I] Iterations: 10 (200 ms warm up)
[05/01/2020-12:27:44] [I] Duration: 10s
[05/01/2020-12:27:44] [I] Sleep time: 0ms
[05/01/2020-12:27:44] [I] Streams: 1
[05/01/2020-12:27:44] [I] Spin-wait: Disabled
[05/01/2020-12:27:44] [I] Multithreading: Enabled
[05/01/2020-12:27:44] [I] CUDA Graph: Disabled
[05/01/2020-12:27:44] [I] Skip inference: Disabled
[05/01/2020-12:27:44] [I] === Reporting Options ===
[05/01/2020-12:27:44] [I] Verbose: Enabled
[05/01/2020-12:27:44] [I] Averages: 10 inferences
[05/01/2020-12:27:44] [I] Percentile: 99
[05/01/2020-12:27:44] [I] Dump output: Disabled
[05/01/2020-12:27:44] [I] Profile: Disabled
[05/01/2020-12:27:44] [I] Export timing to JSON file:
[05/01/2020-12:27:44] [I] Export profile to JSON file:
[05/01/2020-12:27:44] [I]
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[05/01/2020-12:27:44] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT

Input filename: model1.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.6.0
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
While parsing node number 0 [Cast -> "fc1/fc1/bn/Reshape__129:0"]:
--- Begin node ---
input: "fc1/fc1/bn/Reshape/shape:0"
output: "fc1/fc1/bn/Reshape__129:0"
name: "fc1/fc1/bn/Reshape__129"
op_type: "Cast"
attribute {
  name: "to"
  i: 7
  type: INT
}

--- End node ---
ERROR: ModelImporter.cpp:296 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[05/01/2020-12:27:45] [E] Failed to parse onnx file
[05/01/2020-12:27:45] [E] Parsing model failed
[05/01/2020-12:27:45] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=model1.onnx --explicitBatch --verbose

I guess this is because the Cast operation is not supported in TensorRT. But Cast is a very common operator, so is there a plugin already defined for it that I can use?
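By the way, the assertion "tensors.count(input_name)" in the parser means it could not find the Cast node's input tensor ("fc1/fc1/bn/Reshape/shape:0") anywhere in the graph, rather than that the Cast op itself is unsupported. A quick way to hunt for such dangling tensor references, as a sketch using the onnx Python package (the file name is assumed):

import onnx

model = onnx.load("model1.onnx")
graph = model.graph

# Collect every tensor name the graph actually provides:
# graph inputs, initializers, and the outputs of every node.
known = {t.name for t in graph.input}
known |= {t.name for t in graph.initializer}
for node in graph.node:
    known.update(node.output)

# Any node input that nothing provides is a dangling reference -
# exactly what trips "Assertion failed: tensors.count(input_name)".
for node in graph.node:
    for name in node.input:
        if name and name not in known:
            print(node.name, "(" + node.op_type + ") consumes missing tensor:", name)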

Could you please try the latest TRT 7.1 EA release? The model seems to work fine on the latest TRT release:

[06/01/2020-13:57:24] [I] GPU Compute
[06/01/2020-13:57:24] [I] min: 0.551758 ms
[06/01/2020-13:57:24] [I] max: 1.06699 ms
[06/01/2020-13:57:24] [I] mean: 0.57588 ms
[06/01/2020-13:57:24] [I] median: 0.569336 ms
[06/01/2020-13:57:24] [I] percentile: 0.712708 ms at 99%
[06/01/2020-13:57:24] [I] total compute time: 2.92374 s
&&&& PASSED TensorRT.trtexec # ./trtexec --onnx=model1.onnx --explicitBatch --verbose
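(If the goal is the serialized engine rather than just a benchmark, trtexec can also write it out, e.g. trtexec --onnx=model1.onnx --explicitBatch --saveEngine=model1.engine.)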

Thank you, I will try that!

Did you solve the problem?

Hi @rms45
The original model.onnx looks to be an invalid model: it does not run in ONNX Runtime either.
The updated model1.onnx runs in both TRT and ONNX Runtime, so it looks to be a model issue that has since been fixed.
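For anyone who wants to reproduce that check, here is a minimal sketch (assuming the onnx and onnxruntime packages are installed; dynamic dimensions are replaced with 1 for the dummy input):

import numpy as np
import onnx
import onnxruntime as ort

# Structural validation: catches malformed graphs and dangling inputs.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)

# Runtime validation: actually execute the graph once on dummy data.
sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # assumed batch of 1
dummy = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])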

Thanks