The issue: Assertion `numPriors * param.numClasses == inputDims[param.inputOrder[1]].d[0]' failed.

Hi everyone!
I encountered the following problem after converting my .pb file to a .uff file and then running inference. My ssd_mobilenet_v2 was trained on the VOC dataset with an old version of the TensorFlow Object Detection API.

[TensorRT] INFO: Detected 1 input and 2 output network tensors.
python: nmsPlugin.cpp:140: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * param.numClasses == inputDims[param.inputOrder[1]].d[0]' failed.
Aborted (core dumped)

This is the NMS part of the .pbtxt file that was generated when I converted to the .uff file:

graphs {
  id: "main"
  nodes {
    id: "NMS"
    inputs: "Squeeze"
    inputs: "concat_priorbox"
    inputs: "concat_box_loc"
    operation: "_NMS_TRT"
    fields {
      key: "backgroundLabelId_u_int"
      value {
        i: 0
      }
    }
    fields {
      key: "confSigmoid_u_int"
      value {
        i: 1
      }
    }
    fields {
      key: "confidenceThreshold_u_float"
      value {
        d: 1e-08
      }
    }
    fields {
      key: "dtype"
      value {
        dtype: DT_FLOAT32
      }
    }
    fields {
      key: "inputOrder_u_ilist"
      value {
        i_list {
          val: 0
          val: 2
          val: 1
        }
      }
    }
    fields {
      key: "isNormalized_u_int"
      value {
        i: 1
      }
    }
    fields {
      key: "keepTopK_u_int"
      value {
        i: 100
      }
    }
    fields {
      key: "nmsThreshold_u_float"
      value {
        d: 0.6
      }
    }
    fields {
      key: "numClasses_u_int"
      value {
        i: 20
      }
    }
    fields {
      key: "shareLocation_u_int"
      value {
        i: 1
      }
    }
    fields {
      key: "topK_u_int"
      value {
        i: 100
      }
    }
    fields {
      key: "varianceEncodedInTarget_u_int"
      value {
        i: 0
      }
    }
  }

And my config.py file looks like this:

import graphsurgeon as gs
import tensorflow as tf

path = 'model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'
TRTbin = 'TRT_ssd_mobilenet_v2_coco_2018_03_29.bin'
output_name = ['NMS']
dims = [3,300,300]
layout = 7

def add_plugin(graph):
    all_assert_nodes = graph.find_nodes_by_op("Assert")
    graph.remove(all_assert_nodes, remove_exclusive_dependencies=True)

    all_identity_nodes = graph.find_nodes_by_op("Identity")
    graph.forward_inputs(all_identity_nodes)

    Input = gs.create_node(
        name="Input",
        op="Placeholder",
        dtype=tf.float32,
        shape=[1, 3, 300, 300]
    )

    PriorBox = gs.create_plugin_node(
        name="GridAnchor",
        op="GridAnchor_TRT",
        dtype=tf.float32,
        minSize=0.2,
        maxSize=0.95,
        aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
        variance=[0.1,0.1,0.2,0.2],
        featureMapShapes=[19, 10, 5, 3, 2, 1],
        numLayers=6
    )

    NMS = gs.create_plugin_node(
        name="NMS",
        op="NMS_TRT",
        dtype=tf.float32,
        shareLocation=1,
        varianceEncodedInTarget=0,
        backgroundLabelId=0,
        confidenceThreshold=1e-8,
        nmsThreshold=0.6,
        topK=100,
        keepTopK=100,
        numClasses=20,
        inputOrder=[0, 2, 1],
        #inputOrder=[1, 0, 2],
        confSigmoid=1,
        isNormalized=1,
        #scoreConverter="SIGMOID"
    )

    concat_priorbox = gs.create_node(
        name="concat_priorbox",
        op="ConcatV2",
        dtype=tf.float32,
        axis=2
    )

    concat_box_loc = gs.create_plugin_node(
        "concat_box_loc",
        op="FlattenConcat_TRT",
        dtype=tf.float32, 
        axis=1, 
        ignoreBatch=0
    )

    concat_box_conf = gs.create_plugin_node(
        "concat_box_conf",
        op="FlattenConcat_TRT",
        dtype=tf.float32, 
        axis=1, 
        ignoreBatch=0
    )

    namespace_plugin_map = {
        "MultipleGridAnchorGenerator": PriorBox,
        "Postprocessor": NMS,
        "Preprocessor": Input,
        "ToFloat": Input,
        "image_tensor": Input,
        "Concatenate": concat_priorbox,
        "concat": concat_box_loc,
        "concat_1": concat_box_conf
    }

    graph.collapse_namespaces(namespace_plugin_map)
    graph.remove(graph.graph_outputs, remove_exclusive_dependencies=False)
    graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")
 
    return graph
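
For reference, this config is used with the UFF converter roughly like this (a minimal sketch assuming the standard graphsurgeon/uff workflow; the exact uff.from_tensorflow keyword arguments can differ between TensorRT versions, so please double-check against your installation):

import graphsurgeon as gs
import uff

# Apply the surgery defined in add_plugin(), then convert the modified graph to UFF.
dynamic_graph = add_plugin(gs.DynamicGraph(path))
uff.from_tensorflow(
    dynamic_graph.as_graph_def(),
    output_nodes=output_name,                      # ['NMS']
    output_filename='frozen_inference_graph.uff',
    text=True                                      # also write the .pbtxt shown above
)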

What differs from other people's is that my NMS part in the .pbtxt looks like this:

inputs: "Squeeze"
    inputs: "concat_priorbox"
    inputs: "concat_box_loc"

Other people's looks like this:

inputs: "concat_box_conf"
    inputs: "Squeeze"
    inputs: "concat_priorbox"

Also, where can I find nmsPlugin.cpp?
Can anyone help me? Thanks!!

Hi,

There are some naming differences in the new TensorFlow API.
You can check this comment for information:
https://devtalk.nvidia.com/default/topic/1051455/jetson-nano/problems-with-ssd-mobilenet-v2-uff/post/5352128/#5352128

Thanks.

Hi AastaLLL,
I just realized that the difference between my NMS part and other people's is that my model is ssdlite_mobilenet_v2. How should I write the config.py file when I use the ssdlite_mobilenet_v2 model? Thanks very much.

Hi AastaLLL,
When I use the ssd_inception_v2_coco_2017_11_17 model downloaded from the TensorFlow detection model zoo, I use the following config.py file; it converts to a .uff file and runs inference correctly.

Main part of the config.py file:

# Create a mapping of namespace names -> plugin nodes.
    namespace_plugin_map = {
        "MultipleGridAnchorGenerator": PriorBox,
        "Postprocessor": NMS,
        "Preprocessor": Input,
        "ToFloat": Input,
        "image_tensor": Input,
        "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
        "MultipleGridAnchorGenerator/Identity": concat_priorbox,
        "concat": concat_box_loc,
        "concat_1": concat_box_conf
    }

And I get the main NMS part in the .pbtxt file like this:

inputs: "concat_box_loc"
    inputs: "concat_priorbox"
    inputs: "concat_box_conf"

When I use the ssd_mobilenet_v2_coco_2018_03_29 model from the zoo, I use the following config.py file and it runs correctly.
Main part of the config.py file:

namespace_plugin_map = {
        "MultipleGridAnchorGenerator": PriorBox,
        "Postprocessor": NMS,
        "Preprocessor": Input,
        "ToFloat": Input,
        "image_tensor": Input,
        "Concatenate": concat_priorbox,
	"concat": concat_box_loc,
        "concat_1": concat_box_conf
    }

And I get the main NMS part in the .pbtxt file like this:

inputs: "concat_box_conf"
    inputs: "Squeeze"
    inputs: "concat_priorbox"

When I use the following config.py file with my own trained ssd_mobilenet_v2 model, which detects 20 classes, was trained on the VOC dataset with ssd_mobilenet_v2_coco_2018_03_29 as the pretrained model, and uses the newer TensorFlow Object Detection API (models master).

Main part of the config.py file:

namespace_plugin_map = {
        "MultipleGridAnchorGenerator": PriorBox,
        "Postprocessor": NMS,
        "Preprocessor": Input,
        "ToFloat": Input,
        "image_tensor": Input,
        "Concatenate": concat_priorbox,
	"concat": concat_box_loc,
        "concat_1": concat_box_conf
    }

It can convert the .pb file to a .uff file, and I get the following main NMS part in the .pbtxt file:

nodes {
    id: "NMS"
    inputs: "Squeeze"
    inputs: "concat_priorbox"
    inputs: "concat_box_loc"

But it cannot run inference correctly, and I get the following errors:

[TensorRT] INFO: Detected 1 input and 2 output network tensors.
python: nmsPlugin.cpp:140: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * param.numClasses == inputDims[param.inputOrder[1]].d[0]' failed.

I want to know how to write the namespace_plugin_map = {} part of the config.py file for the different versions of the TensorFlow SSD models, because I find that the main difference between the config.py files is the namespace_plugin_map = {} part, and I cannot find any documentation on how to build the namespace_plugin_map that is passed to the function

graph.collapse_namespaces(namespace_plugin_map)
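
For what it's worth, here is a minimal sketch (assuming graphsurgeon's DynamicGraph and as_graph_def APIs) of how to list the top-level namespaces that actually exist in a frozen graph, which is what the keys of namespace_plugin_map have to match:

import graphsurgeon as gs

graph = gs.DynamicGraph('model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb')

# Print the top-level namespace of every node (everything before the first '/'),
# so you can see which names the namespace_plugin_map keys need to match.
namespaces = sorted({node.name.split('/')[0] for node in graph.as_graph_def().node})
for ns in namespaces:
    print(ns)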

I’ve been stuck on this problem for a week!!
Can you give me some guidance? Thanks!!

Hi,
I have solved my problem. The problem lies in the numClasses and inputOrder settings: first, inputOrder should be [0, 1, 2] to match the order of the NMS inputs in the .pbtxt file; second, numClasses should be the detection class count plus 1 for the background. For example, if your detection class count is 20, you should use numClasses=21.

Hope this is helpful for anyone who encounters the same problem as me!!!
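
In other words, the NMS node in config.py should look roughly like this (a sketch based on the fix above; the other parameters stay as in my original config):

    NMS = gs.create_plugin_node(
        name="NMS",
        op="NMS_TRT",
        dtype=tf.float32,
        shareLocation=1,
        varianceEncodedInTarget=0,
        backgroundLabelId=0,
        confidenceThreshold=1e-8,
        nmsThreshold=0.6,
        topK=100,
        keepTopK=100,
        numClasses=21,          # 20 VOC classes + 1 for the background
        inputOrder=[0, 1, 2],   # matches the input order of the NMS node in my .pbtxt
        confSigmoid=1,
        isNormalized=1
    )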

I am facing a similar issue. I have trained an ssd_inception_v2_coco model with 6 classes and input dimensions (418, 418).
Here in my temporary .pbtxt file, the order of the NMS_TRT inputs is:

nodes {
    id: "NMS"
    inputs: "concat_priorbox"
    inputs: "Squeeze"
    inputs: "concat_box_conf"
    operation: "_NMS_TRT"

and I am providing these parameters for the conversion:

'ssd_inception_v2_coco': {
        'input_pb':   os.path.abspath(os.path.join(
                          DIR_NAME, 'ssd_inception_v2_coco.pb')),
        'tmp_uff':    os.path.abspath(os.path.join(
                          DIR_NAME, 'tmp_inception_v2_coco.uff')),
        'output_bin': os.path.abspath(os.path.join(
                          DIR_NAME, 'TRT_ssd_inception_v2_coco.bin')),        
        'num_classes': 7,
        'min_size': 0.2,
        'max_size': 0.95,        
        'input_order': [2, 0, 1], # for custom

and I am getting an error in the last step:

DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['NMS'] as outputs
No. nodes: 721
UFF Output written to /home/sys-admin/Downloads/Projects/tensorrt_demos/ssd/tmp_inception_v2_coco.uff
UFF Text Output written to /home/sys-admin/Downloads/Projects/tensorrt_demos/ssd/tmp_inception_v2_coco.pbtxt
[TensorRT] INFO: Detected 1 inputs and 2 output network tensors.
python3: nmsPlugin.cpp:139: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dim
s*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * numLocClasses * 4 == inputDims[param.inputOrder[0]].d[0]' failed.
./build_engines.sh: line 5: 19996 Aborted                 (core dumped) python3 build_engine.py ${model}

Where am I going wrong???

Resolved this issue by providing input_order = [1, 2, 0], but before that I retrained my model using the models@6518c1c repo.

Hi, you can try 'input_order': [1, 2, 0].
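
For anyone hitting the same assertion, here is a sketch of a corrected model entry (based on the resolution above; the MODEL_SPECS wrapper is only illustrative, and my understanding that input_order lists the positions of the loc, conf, and priorbox tensors among the NMS inputs should be checked against the plugin documentation):

import os

DIR_NAME = os.path.dirname(os.path.abspath(__file__))

# Inputs of the NMS node in the generated .pbtxt:
#   index 0: concat_priorbox  -> priorbox data
#   index 1: Squeeze          -> loc data
#   index 2: concat_box_conf  -> conf data
# Assumed convention: input_order = [loc index, conf index, priorbox index] = [1, 2, 0]
MODEL_SPECS = {
    'ssd_inception_v2_coco': {
        'input_pb':   os.path.join(DIR_NAME, 'ssd_inception_v2_coco.pb'),
        'tmp_uff':    os.path.join(DIR_NAME, 'tmp_inception_v2_coco.uff'),
        'output_bin': os.path.join(DIR_NAME, 'TRT_ssd_inception_v2_coco.bin'),
        'num_classes': 7,            # 6 classes + 1 background
        'min_size': 0.2,
        'max_size': 0.95,
        'input_order': [1, 2, 0],    # was [2, 0, 1]
    },
}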