Parser FusedBatchNorm error during conversion of frozen_inference_graph.pb to .uff

Environment
TensorRT 7.1.0
Jetson TX2

[TRT] UffParser: Parser error: MobilenetV2/Conv/BatchNorm/FusedBatchNorm: The input to the Scale Layer is required to have a minimum of 3 dimensions.

How to fix it?

Hi,

As mentioned in the log, the Scale layer requires an input with at least 3 dimensions:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#layers-matrix

In general, we don’t see this issue with MobilenetV2.
May I know how you generated the .uff model?
Which config.py file did you use?

Thanks.

Hi,
I converted deeplabv3_mobilenetv2.pb, which I trained myself, to .uff.
The command is as follows:
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py --input-file deeplabv3_mobilenetv2.pb -O ArgMax -o mobilenet_0707.uff -p config.py -t

config.py
import tensorrt as trt
import graphsurgeon as gs
import tensorflow as tf

TRT_LOGGER = trt.Logger()
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

Input = gs.create_node("Input",
                       op="Placeholder",
                       dtype=tf.float32,
                       shape=[1, 3, 525, 525])

concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat2_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat2_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
resize = gs.create_plugin_node("resize", op="ResizeBilinear_TRT", dtype=tf.float32, axis=1)
space_to_batch = gs.create_plugin_node("space_to_batch", op="SpaceToBatchND_TRT", dtype=tf.float32, axis=1)
batch_to_space = gs.create_plugin_node("batch_to_space", op="BatchToSpaceND_TRT", dtype=tf.float32, axis=1)

namespace_plugin_map = {
    "ImageTensor": Input,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
    "ResizeBilinear": resize,
    "ResizeBilinear_1": resize,
    "ResizeBilinear_2": resize,
    "MobilenetV2/expanded_conv_14/depthwise/depthwise/SpaceToBatchND": space_to_batch,
    "MobilenetV2/expanded_conv_14/depthwise/depthwise/BatchToSpaceND": batch_to_space,
    "MobilenetV2/expanded_conv_15/depthwise/depthwise/SpaceToBatchND": space_to_batch,
    "MobilenetV2/expanded_conv_15/depthwise/depthwise/BatchToSpaceND": batch_to_space,
    "MobilenetV2/expanded_conv_16/depthwise/depthwise/SpaceToBatchND": space_to_batch,
    "MobilenetV2/expanded_conv_16/depthwise/depthwise/BatchToSpaceND": batch_to_space
}

def preprocess(dynamic_graph):
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    #dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
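For intuition about what the preprocess step does: collapse_namespaces replaces every node whose name matches a mapped key, or that lives under that key's namespace, with the corresponding plugin node, so many TensorFlow ops collapse into a single plugin op. A stdlib-only sketch of that idea, treating node names as plain strings rather than real graphsurgeon objects (illustrative only, not the actual graphsurgeon implementation):

```python
def collapse_namespaces(node_names, plugin_map):
    """Map each graph node name to its replacement plugin name if it
    matches a key exactly or lives under that key's namespace.
    Illustrative sketch, not the real graphsurgeon API."""
    collapsed = []
    for name in node_names:
        for prefix, plugin in plugin_map.items():
            if name == prefix or name.startswith(prefix + "/"):
                if plugin not in collapsed:  # many nodes collapse into one
                    collapsed.append(plugin)
                break
        else:
            collapsed.append(name)  # unmapped nodes are kept as-is
    return collapsed

nodes = ["ImageTensor", "concat/axis", "concat", "MobilenetV2/Conv/Conv2D"]
print(collapse_namespaces(nodes, {"ImageTensor": "Input", "concat": "concat_box_loc"}))
# -> ['Input', 'concat_box_loc', 'MobilenetV2/Conv/Conv2D']
```

This deduplication is also why mapping ResizeBilinear, ResizeBilinear_1, and ResizeBilinear_2 all to the same resize node leaves a single plugin op in the collapsed graph.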

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Sorry for the late update.

We tried to reproduce this issue in our environment, but could only find a PyTorch-based deeplabv3_mobilenetv2 online.
Would you mind sharing your model with us so we can check it directly?

Thanks.