Hi,
I am trying to accelerate a TensorFlow model using TensorRT. I can convert the frozen graph to an ONNX file, but I am not able to build the TensorRT engine. The error "unused input" doesn't make sense, since the input is connected to the other nodes, as can be seen in TensorBoard or Netron.
I get the following error/warnings:
2020-05-29 23:07:41.265729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2988 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
Cannot infer shape for conv1_1/conv1_1/bn/FusedBatchNormV3: conv1_1/conv1_1/bn/FusedBatchNormV3:5
Cannot infer shape for conv1_2/conv1_2/bn/FusedBatchNormV3: conv1_2/conv1_2/bn/FusedBatchNormV3:5
Cannot infer shape for conv2_1/1/conv2_1/1/bn/FusedBatchNormV3: conv2_1/1/conv2_1/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv2_3/bn/FusedBatchNormV3: conv2_3/bn/FusedBatchNormV3:5
Cannot infer shape for conv2_3/1/conv2_3/1/bn/FusedBatchNormV3: conv2_3/1/conv2_3/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_1/bn/FusedBatchNormV3: conv3_1/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_1/1/conv3_1/1/bn/FusedBatchNormV3: conv3_1/1/conv3_1/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_3/bn/FusedBatchNormV3: conv3_3/bn/FusedBatchNormV3:5
Cannot infer shape for conv3_3/1/conv3_3/1/bn/FusedBatchNormV3: conv3_3/1/conv3_3/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_1/bn/FusedBatchNormV3: conv4_1/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_1/1/conv4_1/1/bn/FusedBatchNormV3: conv4_1/1/conv4_1/1/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_3/bn/FusedBatchNormV3: conv4_3/bn/FusedBatchNormV3:5
Cannot infer shape for conv4_3/1/conv4_3/1/bn/FusedBatchNormV3: conv4_3/1/conv4_3/1/bn/FusedBatchNormV3:5
Cannot infer shape for fc1/fc1/bn/FusedBatchNormV3: fc1/fc1/bn/FusedBatchNormV3:5
Loading ONNX file from path model.onnx…
Beginning ONNX file parsing
[TensorRT] VERBOSE: map/Shape:0:Shape -> (4)
Completed parsing of ONNX file
[TensorRT] ERROR: Unused Input: images:0
Traceback (most recent call last):
  File "build_engine.py", line 83, in <module>
    main()
  File "build_engine.py", line 73, in main
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
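The AttributeError is a downstream symptom: when the ONNX parser fails, the builder returns None instead of an engine, so engine.serialize() blows up. The "Unused Input" error itself means that, after parsing, no node consumes the declared input tensor. A minimal sketch of that check in plain Python (the function and the tensor-name lists are hypothetical illustrations, not TensorRT API):

```python
def find_unused_inputs(graph_input_names, node_input_names):
    """Return declared graph inputs that no node consumes.

    graph_input_names: names declared as graph inputs (e.g. collected
    from an ONNX model's graph.input).
    node_input_names: the union of every node's input tensor names.
    """
    consumed = set(node_input_names)
    return [name for name in graph_input_names if name not in consumed]

# Hypothetical example: "images:0" is declared but never referenced,
# which is the situation TensorRT reports as "Unused Input".
print(find_unused_inputs(["images:0", "const:0"], ["const:0", "conv1/W:0"]))
# -> ['images:0']
```

It is also worth guarding the build script: check the parser's return value (and print its reported errors) before calling serialize(), so the failure surfaces at parse time rather than as a NoneType error.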
Environment

TensorRT Version: 6.0.1.10
GPU Type: Pascal
Nvidia Driver Version:
CUDA Version: v10.0
CUDNN Version: 7.6.3.28
Operating System + Version: aarch64, Jetson TX2
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.15
Baremetal or Container (if container which image + tag):
WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
While parsing node number 0 [Cast -> "fc1/fc1/bn/Reshape__129:0"]:
--- Begin node ---
input: "fc1/fc1/bn/Reshape/shape:0"
output: "fc1/fc1/bn/Reshape__129:0"
name: "fc1/fc1/bn/Reshape__129"
op_type: "Cast"
attribute {
  name: "to"
  i: 7
  type: INT
}
--- End node ---
ERROR: ModelImporter.cpp:296 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[05/01/2020-12:27:45] [E] Failed to parse onnx file
[05/01/2020-12:27:45] [E] Parsing model failed
[05/01/2020-12:27:45] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=model1.onnx --explicitBatch --verbose
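The assertion tensors.count(input_name) in ModelImporter.cpp fires when a node references a tensor name the parser has not registered yet, i.e. a name that is neither a graph input, nor an initializer, nor the output of an earlier node. A simplified, hypothetical re-creation of that bookkeeping in plain Python (not the parser's actual code):

```python
def first_dangling_input(graph_inputs, initializers, nodes):
    """Return the first node input that is neither a graph input, an
    initializer, nor an earlier node's output -- a simplified stand-in
    for the parser's tensors.count(input_name) check.

    nodes: list of (input_names, output_names) tuples in graph order.
    """
    known = set(graph_inputs) | set(initializers)
    for node_inputs, node_outputs in nodes:
        for name in node_inputs:
            if name not in known:
                return name
        known.update(node_outputs)
    return None

# Hypothetical graph mirroring the failing Cast node: its input tensor
# was never registered anywhere, so the check trips on it.
nodes = [(["fc1/fc1/bn/Reshape/shape:0"], ["fc1/fc1/bn/Reshape__129:0"])]
print(first_dangling_input(["images:0"], [], nodes))
# -> fc1/fc1/bn/Reshape/shape:0
```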
I guess this is because the Cast operation is not supported in TensorRT, but Cast is a very common operator. Is there a plugin already defined for this that I can use?
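For reference, the to: 7 attribute in the node dump is a value of ONNX's TensorProto.DataType enum, so this Cast targets INT64; TensorRT does not natively support INT64, which older parser versions such as 6.x cope with less gracefully. A small lookup table (a subset of the enum values from the ONNX spec):

```python
# Subset of the ONNX TensorProto.DataType enum (values per the ONNX
# spec). The failing node's attribute "to: 7" therefore requests a
# cast to INT64.
ONNX_DTYPE = {
    1: "FLOAT",
    6: "INT32",
    7: "INT64",
    9: "BOOL",
    10: "FLOAT16",
    11: "DOUBLE",
}
print(ONNX_DTYPE[7])  # -> INT64
```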
Hi @rms45
The original model.onnx appears to be an invalid model: it does not run in ONNX Runtime either.
The updated model1.onnx runs in both TensorRT and ONNX Runtime, so this looks to be a model issue that has since been fixed.