Creating engine file of SSD Mobilenet v2 to run on Deepstream app

I have been trying for ages to convert a retrained SSD Mobilenet V2 frozen protobuf file into a TensorRT engine file that runs successfully on the DeepStream sample app.

Can anyone please help me with the steps in detail?

Hi,

May I know how you convert the model into a TensorRT engine?
It’s recommended to follow the steps shared in this example:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/README

You may also need a customized config for the ssd_mobilenet_v2 model here:
https://github.com/AastaNV/TRT_object_detection/tree/master/config

Thanks.

I downloaded the pretrained SSD Mobilenet V2 model from here: http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

I used the config file contained in the tar itself, i.e. pipeline.config, and changed the number of classes.

I trained on TensorFlow 1.14.

I generated a pb (protobuf) file from the retrained checkpoints using the export_inference_graph.py script in the models/research/object_detection directory.
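For reference, a typical invocation of that exporter looks like the following (the checkpoint prefix and output directory here are illustrative placeholders, not my exact paths):

```shell
python3 models/research/object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path pipeline.config \
    --trained_checkpoint_prefix training/model.ckpt-50000 \
    --output_directory exported_model
```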

I used this command to create the uff file from the pb file:
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph_mobilenet_v2_3_classes.pb -O NMS -p config_TRT.py

config_TRT.py is as follows:

import graphsurgeon as gs

# path = 'model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'
# TRTbin = 'TRT_ssd_mobilenet_v2_coco_2018_03_29.bin'
# output_name = ['NMS']
# dims = [3,300,300]
# layout = 7
# convert_to_uff's "-p" option calls this hook; it must be named preprocess()
def preprocess(graph):
    all_assert_nodes = graph.find_nodes_by_op("Assert")
    graph.remove(all_assert_nodes, remove_exclusive_dependencies=True)
    all_identity_nodes = graph.find_nodes_by_op("Identity")
    graph.forward_inputs(all_identity_nodes)
    Input = gs.create_plugin_node(
        name="Input",
        op="Placeholder",
        shape=[1, 3, 300, 300]
    )
    PriorBox = gs.create_plugin_node(
        name="GridAnchor",
        op="GridAnchor_TRT",
        minSize=0.2,
        maxSize=0.95,
        aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
        variance=[0.1,0.1,0.2,0.2],
        featureMapShapes=[19, 10, 5, 3, 2, 1],
        numLayers=6
    )
    NMS = gs.create_plugin_node(
        name="NMS",
        op="NMS_TRT",
        shareLocation=1,
        varianceEncodedInTarget=0,
        backgroundLabelId=0,
        confidenceThreshold=1e-8,
        nmsThreshold=0.6,
        topK=100,
        keepTopK=100,
        numClasses=4,
        inputOrder=[1, 0, 2],
        confSigmoid=1,
        isNormalized=1
    )

    concat_priorbox = gs.create_node(
        "concat_priorbox",
        op="ConcatV2",
        axis=2
    )
    concat_box_loc = gs.create_plugin_node(
        "concat_box_loc",
        op="FlattenConcat_TRT",
    )
    concat_box_conf = gs.create_plugin_node(
        "concat_box_conf",
        op="FlattenConcat_TRT",
    )

    namespace_plugin_map = {
        "MultipleGridAnchorGenerator": PriorBox,
        "Postprocessor": NMS,
        "Preprocessor": Input,
        "ToFloat": Input,
        "image_tensor": Input,
        "Concatenate": concat_priorbox,
        "concat": concat_box_loc,
        "concat_1": concat_box_conf
    }

    graph.collapse_namespaces(namespace_plugin_map)
    graph.remove(graph.graph_outputs, remove_exclusive_dependencies=False)
    # Drop the spurious Input edge that namespace collapsing leaves on NMS
    graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")
    # return graph
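As a side note, the featureMapShapes values passed to the GridAnchor node above can be sanity-checked with a small sketch (assuming the standard SSD300 layout: a stride-16 first detection layer followed by ceil-rounded halving at each subsequent layer):

```python
import math

def ssd_feature_map_shapes(input_size=300, first_stride=16, num_layers=6):
    """Feature map sizes for an SSD detector with a 300x300 input.

    The first detection layer runs at stride 16 (ceil(300/16) -> 19), and
    each subsequent layer halves the resolution with ceil rounding.
    """
    shapes = [math.ceil(input_size / first_stride)]
    for _ in range(num_layers - 1):
        shapes.append(math.ceil(shapes[-1] / 2))
    return shapes

print(ssd_feature_map_shapes())  # [19, 10, 5, 3, 2, 1]
```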

In /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt, I replaced the uff file with the one generated in the steps above.
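For completeness, the fields I touched in config_infer_primary_ssd.txt look roughly like this (a sketch, not the full file; the uff file name and class count match my model, the other keys keep the sample's values):

```ini
[property]
uff-file=frozen_inference_graph_mobilenet_v2_3_classes.uff
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
output-blob-names=MarkOutput_0
num-detected-classes=4
```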

Then I ran: deepstream-app -c deepstream_app_config_ssd.txt

I got the following error:

nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): UffParser: Validator error: Cast: Unsupported operation _Cast
0:00:03.508229534 19980 0x96ff8d0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse UFF file: incorrect file or incorrect input/output blob names
0:00:03.508467142 19980 0x96ff8d0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files

Hi,

Our DeepStream objectDetector_SSD sample targets the ssd_inception_v2_coco model.

You will need some updates for the ssd_mobilenet_v2_coco model.
Please check if this config helps first:
https://github.com/AastaNV/TRT_object_detection/blob/master/config/model_ssd_mobilenet_v2_coco_2018_03_29.py
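In particular, TensorFlow 1.13+ exporters emit a Cast op where older versions emitted ToFloat, which is one commonly reported cause of the "Unsupported operation _Cast" validator error. A possible tweak (please verify against your own graph) is to also map Cast to the Input plugin in the namespace map of your preprocessing config:

```python
# In config_TRT.py: map the Cast op (emitted by TF >= 1.13 exporters in
# place of ToFloat) onto the same Input plugin node.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "Cast": Input,  # added: may resolve "Unsupported operation _Cast"
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}
```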

Thanks.

Hello, AastaLLL.
I performed transfer learning on ssd_inception_v2_coco and got the pb file after training. I converted it into a uff file following the README steps in objectDetector_SSD, but I also ran into this problem when running deepstream-app. Do you have a better solution?

Hi miteshp.patel, did you manage to adapt the TF model trained on your custom dataset to DeepStream?