Converting models to UFF and building engines

Hi,

I am getting errors when building engines from custom-trained SSD networks and would like to know how to do the conversion correctly.

I am able to convert the ssd_inception_v2 models (2017 and 2019) from the model zoo. However, when using a custom-trained version with just one class, I get an error. I set num_classes=2 in the NMS layer (my single class plus background). Is that the only change to config.py needed when only the number of classes differs?

Here is the config.py I use for the conversion:

import graphsurgeon as gs
import tensorflow as tf

# Static NCHW input node expected by the UFF parser.
Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])
# GridAnchor plugin standing in for MultipleGridAnchorGenerator.
PriorBox = gs.create_plugin_node(name="MultipleGridAnchorGenerator", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1,0.1,0.2,0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
# NMS plugin replacing the Postprocessor namespace; numClasses counts
# the background class, hence 2 for a single-class detector.
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=2,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)
# Concatenates the generated prior boxes along the anchor axis.
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
# FlattenConcat plugins flatten and concatenate the per-layer box
# location and class confidence tensors fed into NMS.
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "Concatenate": concat_priorbox,
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "image_tensor": Input,
    "ToFloat": Input,
    "Preprocessor": Input,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

def preprocess(dynamic_graph):
    # Remove Identity nodes by forwarding their inputs to their consumers.
    dynamic_graph.forward_inputs(dynamic_graph.find_nodes_by_op("Identity"))
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
    # Disconnect the image_tensor node from NMS, as it expects to have only 3 inputs.
    dynamic_graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")
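
For reference, this is how I invoke the conversion with the convert-to-uff tool that ships with the TensorRT uff Python package. The file names are just what I use locally (frozen_inference_graph.pb is the TF Object Detection API export default; the output name is arbitrary):

```shell
# Hypothetical file names; adjust to your own model directory.
MODEL=frozen_inference_graph.pb
OUTPUT=sample_ssd.uff

# -O names the single output node (the NMS plugin created in config.py),
# -p points at the preprocessing config shown above.
if [ -f "$MODEL" ] && command -v convert-to-uff >/dev/null 2>&1; then
    convert-to-uff "$MODEL" -o "$OUTPUT" -O NMS -p config.py
else
    echo "convert-to-uff or $MODEL not available; skipping conversion"
fi
```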

The error I get is

[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Reshape/shape
[TensorRT] INFO: UFFParser: parsing BoxPredictor_0/Reshape
[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
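
As I understand it, the parser allows at most one inferred (-1) dimension in a reshape target, so the shape constant feeding BoxPredictor_0/Reshape in my graph presumably ends up with two wildcards. A small sketch of the check the parser seems to be applying (the concrete shape values below are made up for illustration):

```python
def count_wildcards(shape):
    """Count inferred (-1) entries in a reshape target shape."""
    return sum(1 for d in shape if d == -1)

# Illustrative BoxPredictor reshape shapes; the real values depend
# on the anchor configuration of the trained model.
ok_shape = [-1, 1083, 1, 4]   # one wildcard: parseable
bad_shape = [-1, -1, 1, 4]    # two wildcards: triggers the error above

assert count_wildcards(ok_shape) == 1
assert count_wildcards(bad_shape) == 2
```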

Conversion works on the model from the model zoo, but the custom-trained one throws this error when building the engine. The only differences between the models are the number of classes and possibly the TensorFlow version used for training. Could the TensorFlow version affect this?

I think https://devtalk.nvidia.com/default/topic/1043557/tensorrt/error-uffparser-parser-error-boxpredictor_0-reshape-reshape-1-dimension-specified-more-than-1-time/ might be a related issue.

Thank you.

Hello,

Can you provide details on the platforms you are using?

CUDA version
CUDNN version
Python version [if using python]
Tensorflow version
TensorRT version
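
For the Python-side items, something like this quick best-effort sketch can collect the versions in one go (CUDA and cuDNN versions usually come from nvcc --version and cudnn.h instead):

```python
import platform

def report_versions():
    """Best-effort report of the Python-side versions requested above."""
    info = {"python": platform.python_version()}
    # TensorFlow and TensorRT may not be importable in every
    # environment; report "not installed" rather than failing.
    for name in ("tensorflow", "tensorrt"):
        try:
            info[name] = __import__(name).__version__
        except Exception:
            info[name] = "not installed"
    return info

print(report_versions())
```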

I’ll try to repro the problem.
Thanks.