Parser error: The input to the Scale Layer is required to have a minimum of 3 dimensions

Description

Hi all,
I am trying to follow this tutorial: https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification.
However, I ran into the following error in the ‘Convert frozen graph to TensorRT engine’ step.

nvidia@xavier:~/tf_to_trt_image_classification$ python3 scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float
2020-08-04 20:24:15.431160: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py:227: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.7
=== Automatically deduced input nodes ===
[name: "input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: 224
      }
      dim {
        size: 224
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

Using output node InceptionV1/Logits/SpatialSqueeze
Converting to UFF graph
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['InceptionV1/Logits/SpatialSqueeze'] as outputs
No. nodes: 486
UFF Output written to data/tmp.uff
UFFParser: Parsing input[Op: Input].
UFFParser: input -> [3,224,224]
UFFParser: Applying order forwarding to: input
UFFParser: Parsing InceptionV1/Conv2d_1a_7x7/weights[Op: Const].
UFFParser: InceptionV1/Conv2d_1a_7x7/weights -> [7,7,3,64]
UFFParser: Applying order forwarding to: InceptionV1/Conv2d_1a_7x7/weights
UFFParser: Parsing InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D[Op: Conv]. Inputs: input, InceptionV1/Conv2d_1a_7x7/weights
UFFParser: Inserting transposes for InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D
Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: kernel weights has count 9408 but 702464 was expected
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: count of 9408 weights in kernel, but kernel dimensions (7,7) with 224 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 224 * 7*7 * 64 / 1 = 702464
UFFParser: InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D -> []
UFFParser: Applying order forwarding to: InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D
UFFParser: Parsing InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/Const[Op: Const].
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: kernel weights has count 9408 but 702464 was expected
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: count of 9408 weights in kernel, but kernel dimensions (7,7) with 224 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 224 * 7*7 * 64 / 1 = 702464
UFFParser: InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/Const -> []
UFFParser: Applying order forwarding to: InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/Const
UFFParser: Parsing InceptionV1/Conv2d_1a_7x7/BatchNorm/beta[Op: Const].
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: kernel weights has count 9408 but 702464 was expected
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: count of 9408 weights in kernel, but kernel dimensions (7,7) with 224 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 224 * 7*7 * 64 / 1 = 702464
UFFParser: InceptionV1/Conv2d_1a_7x7/BatchNorm/beta -> []
UFFParser: Applying order forwarding to: InceptionV1/Conv2d_1a_7x7/BatchNorm/beta
UFFParser: Parsing InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_mean[Op: Const].
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: kernel weights has count 9408 but 702464 was expected
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: count of 9408 weights in kernel, but kernel dimensions (7,7) with 224 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 224 * 7*7 * 64 / 1 = 702464
UFFParser: InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_mean -> []
UFFParser: Applying order forwarding to: InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_mean
UFFParser: Parsing InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_variance[Op: Const].
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: kernel weights has count 9408 but 702464 was expected
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: count of 9408 weights in kernel, but kernel dimensions (7,7) with 224 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 224 * 7*7 * 64 / 1 = 702464
UFFParser: InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_variance -> []
UFFParser: Applying order forwarding to: InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_variance
UFFParser: Parsing InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/FusedBatchNormV3[Op: BatchNorm]. Inputs: InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D, InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/Const, InceptionV1/Conv2d_1a_7x7/BatchNorm/beta, InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_mean, InceptionV1/Conv2d_1a_7x7/BatchNorm/moving_variance
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: kernel weights has count 9408 but 702464 was expected
InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D: count of 9408 weights in kernel, but kernel dimensions (7,7) with 224 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 224 * 7*7 * 64 / 1 = 702464
UffParser: Parser error: InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/FusedBatchNormV3: The input to the Scale Layer is required to have a minimum of 3 dimensions.
Failed to parse UFF

I tried to find a solution but couldn’t.

Any help would be greatly appreciated.

Thanks.

Environment

TensorRT Version: 7.1.0
nvidia@xavier:~/tf_to_trt_image_classification$ dpkg -l | grep TensorRT

ii  graphsurgeon-tf                               7.1.0-1+cuda10.2                                 arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                7.1.0-1+cuda10.2                                 arm64        TensorRT binaries
ii  libnvinfer-dev                                7.1.0-1+cuda10.2                                 arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                7.1.0-1+cuda10.2                                 all          TensorRT documentation
ii  libnvinfer-plugin-dev                         7.1.0-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-plugin7                            7.1.0-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-samples                            7.1.0-1+cuda10.2                                 all          TensorRT samples
ii  libnvinfer7                                   7.1.0-1+cuda10.2                                 arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                          7.1.0-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvonnxparsers7                             7.1.0-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                              7.1.0-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  libnvparsers7                                 7.1.0-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt                 7.1.0.16-1+cuda10.2                              arm64        Jetpack TensorRT CSV file
ii  python-libnvinfer                             7.1.0-1+cuda10.2                                 arm64        Python bindings for TensorRT
ii  python-libnvinfer-dev                         7.1.0-1+cuda10.2                                 arm64        Python development package for TensorRT
ii  python3-libnvinfer                            7.1.0-1+cuda10.2                                 arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                        7.1.0-1+cuda10.2                                 arm64        Python 3 development package for TensorRT
ii  tensorrt                                      7.1.0.16-1+cuda10.2                              arm64        Meta package of TensorRT
ii  uff-converter-tf                              7.1.0-1+cuda10.2                                 arm64        UFF converter for TensorRT package

GPU Type: 512-Core NVIDIA Volta @ 1377MHz
Nvidia Driver Version: ?
CUDA Version: V10.2.89

nvidia@xavier:~/tf_to_trt_image_classification$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_21:14:42_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89

CUDNN Version: 8.0.0

nvidia@xavier:~/tf_to_trt_image_classification$ dpkg --list | grep libcudnn
ii  libcudnn8                                     8.0.0.145-1+cuda10.2                             arm64        cuDNN runtime libraries
ii  libcudnn8-dev                                 8.0.0.145-1+cuda10.2                             arm64        cuDNN development libraries and headers
ii  libcudnn8-doc                                 8.0.0.145-1+cuda10.2                             arm64        cuDNN documents and samples

Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.2


Hi @yjkim2,
UFF conversion has been deprecated since TensorRT 7.
The recommended way is to use
TF -> ONNX -> TRT
or
use TF-TRT (rough sketches of both paths are below).
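
As a sketch of the ONNX path, assuming the tf2onnx package is installed (the opset value and output file paths are my own illustrative choices; the node names are taken from your convert_plan.py command):

# Convert the frozen TensorFlow graph to ONNX with tf2onnx.
python3 -m tf2onnx.convert \
    --graphdef data/frozen_graphs/inception_v1.pb \
    --inputs input:0 \
    --outputs InceptionV1/Logits/SpatialSqueeze:0 \
    --opset 11 \
    --output data/inception_v1.onnx

# Build and serialize a TensorRT engine from the ONNX model.
/usr/src/tensorrt/bin/trtexec \
    --onnx=data/inception_v1.onnx \
    --saveEngine=data/plans/inception_v1.plan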

For TF-TRT, please check the link below for reference.

https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usingtftrt
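
And a minimal TF-TRT sketch against the TF 1.x API shipped with your TensorFlow 1.15.2 (the file paths and precision mode are illustrative assumptions):

# Optimize the frozen graph with TF-TRT (TensorFlow 1.x API).
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load the frozen GraphDef produced by the tutorial.
with tf.io.gfile.GFile('data/frozen_graphs/inception_v1.pb', 'rb') as f:
    frozen_graph = tf.compat.v1.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Replace supported subgraphs with TensorRT ops; the output node is
# kept in TensorFlow via nodes_blacklist.
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=['InceptionV1/Logits/SpatialSqueeze'],
    precision_mode='FP16')  # illustrative; FP32/INT8 are also possible
trt_graph = converter.convert()

# Save the optimized graph for later inference in a TF session.
with tf.io.gfile.GFile('data/inception_v1_trt.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())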

Thanks!