How to convert SSD MobileNet v2 to UFF, then use the UFF in the jetson-inference detectnet-camera script?

Step 1:

Download ssd_mobilenet_v2_coco from the TensorFlow model zoo:

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

Step 2: Modify config.py in /usr/src/tensorrt/samples/sampleUffSSD

import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1,0.1,0.2,0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    ###########################################
    #inputOrder=[0, 2, 1],
    inputOrder=[1, 0, 2],
    ###########################################
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID")
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

def preprocess(dynamic_graph):
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
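As a quick sanity check on the GridAnchor parameters above, the featureMapShapes determine how many grid cells feed the prior-box generation. The per-cell anchor count depends on how the plugin expands the aspect ratios internally, so the multiplier below is an illustrative assumption, not a value read from the plugin:

```python
# Grid-cell arithmetic for the featureMapShapes used above.
# Only the cell total is exact; anchors_per_cell is an assumed
# illustrative value (GridAnchor_TRT derives the real count from
# numLayers/aspectRatios internally).
feature_map_shapes = [19, 10, 5, 3, 2, 1]

total_cells = sum(f * f for f in feature_map_shapes)
print(total_cells)  # 361 + 100 + 25 + 9 + 4 + 1 = 500

anchors_per_cell = 6  # assumption for illustration only
print(total_cells * anchors_per_cell)  # 3000
```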

Step 3:
sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py ~/test/TRT_object_detection/model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb -o hello.uff -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py

Step 4:

./detectnet-camera --model=./networks/hello.uff --class_labels=./networks/tmp/ssd_coco_labels.txt

However, I get some errors:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink
nvbuf_utils: Could not get EGL display connection
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0

detectnet-camera:  successfully initialized camera device
    width:  1280
   height:  720
    depth:  12 (bpp)


detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        ./networks/hello.uff
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels ./networks/tmp/ssd_coco_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]   TensorRT version 5.0.6
[TRT]   loading NVIDIA plugins...
[TRT]   completed loading NVIDIA plugins.
[TRT]   detected model format - UFF  (extension '.uff')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file ./networks/hello.uff.1.1.GPU.FP16.engine
[TRT]   cache file not found, profiling network model on device GPU
[TRT]   device GPU, loading /home/jetbot/test/jetson-inference/build/aarch64/bin/ ./networks/hello.uff
[TRT]   FeatureExtractor/MobilenetV2/Conv/Relu6: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [1,32,150,150] and [1,1,1])
[TRT]   FeatureExtractor/MobilenetV2/expanded_conv/depthwise/depthwise: at least three non-batch dimensions are required for input
[TRT]   UFFParser: Parser error: FeatureExtractor/MobilenetV2/expanded_conv/depthwise/BatchNorm/batchnorm/mul_1: The input to the Scale Layer is required to have a minimum of 3 dimensions.
[TRT]   failed to parse UFF model './networks/hello.uff'
[TRT]   device GPU, failed to load ./networks/hello.uff
detectNet -- failed to initialize.
detectnet-camera:   failed to load detectNet model

Are there any solutions for this?

Thanks

I really hope you can answer my question. Thank you!


Hi,

The config shared in /usr/src/tensorrt/samples/sampleUffSSD/ is for ssd_inception_v2.
Could you try to use this config instead:
https://github.com/AastaNV/TRT_object_detection/blob/master/config/model_ssd_mobilenet_v2_coco_2018_03_29.py

Thanks.

Thank you for your response!

I modified model_ssd_mobilenet_v2_coco_2018_03_29.py and used the code below to convert to UFF, but I get the same error!

import graphsurgeon as gs
import tensorflow as tf
import uff

def add_plugin(graph):
    all_assert_nodes = graph.find_nodes_by_op("Assert")
    graph.remove(all_assert_nodes, remove_exclusive_dependencies=True)

    all_identity_nodes = graph.find_nodes_by_op("Identity")
    graph.forward_inputs(all_identity_nodes)

    Input = gs.create_node("Input", op="Placeholder", dtype=tf.float32, shape=[1, 3, 300, 300])
    PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT", numLayers=6, minSize=0.2, maxSize=0.95, aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33], variance=[0.1, 0.1, 0.2, 0.2], featureMapShapes=[19, 10, 5, 3, 2, 1])
    NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT", shareLocation=1, varianceEncodedInTarget=0, backgroundLabelId=0, confidenceThreshold=1e-8, nmsThreshold=0.6, topK=100, keepTopK=100, numClasses=91, inputOrder=[1, 0, 2], confSigmoid=1, isNormalized=1)
    concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
    concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
    concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

    namespace_plugin_map = {
        "MultipleGridAnchorGenerator": PriorBox,
        "Postprocessor": NMS,
        "Preprocessor": Input,
        "ToFloat": Input,
        "image_tensor": Input,
        "Concatenate": concat_priorbox,
        "concat": concat_box_loc,
        "concat_1": concat_box_conf
    }

    graph.collapse_namespaces(namespace_plugin_map)
    graph.remove(graph.graph_outputs, remove_exclusive_dependencies=False)
    graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")
    return graph

dynamic_graph = add_plugin(gs.DynamicGraph("/home/jetbot/test/TRT_object_detection/model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb"))
uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), ["NMS"], output_filename="tmp.uff")
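For reference, the inputOrder setting is a key difference from the stock sample config: it tells the NMS_TRT plugin which of its three graph-order inputs carry the box locations, class confidences, and prior boxes, respectively. A small sketch of that indexing (the input names and their graph order here are illustrative, not read from the real collapsed graph):

```python
# Hypothetical graph-order inputs of the collapsed NMS node; after
# collapsing namespaces, the confidence concat can end up first.
nms_graph_inputs = ["concat_box_conf", "concat_box_loc", "concat_priorbox"]

# inputOrder=[1, 0, 2] means: locations are at input index 1,
# confidences at index 0, prior boxes at index 2.
input_order = [1, 0, 2]
loc, conf, priors = (nms_graph_inputs[i] for i in input_order)
print(loc, conf, priors)  # concat_box_loc concat_box_conf concat_priorbox
```

If the plugin reads its inputs in the wrong order, parsing or detection fails in confusing ways, which is why the sample's default order does not work for this model.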

Hi,

Thanks for your testing.
We are trying to reproduce this issue and will update you with more information later.

Thanks.

Good News.

Thank you!

Hi,

We can run jetson_inference with ssd_mobilenet_v2_coco successfully.
Here are our steps for your reference:

1. Generate uff file

config.py

import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1,0.1,0.2,0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    ###########################################
    #inputOrder=[0, 2, 1],
    inputOrder=[1, 0, 2],
    ###########################################
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID")
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
#   "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

def preprocess(dynamic_graph):
    all_assert_nodes = dynamic_graph.find_nodes_by_op("Assert")
    dynamic_graph.remove(all_assert_nodes, remove_exclusive_dependencies=True)

    all_identity_nodes = dynamic_graph.find_nodes_by_op("Identity")
    dynamic_graph.forward_inputs(all_identity_nodes)

    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
    dynamic_graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")

Then run the conversion:

sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py [/path/to/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb] -o ssd_mobilenet_v2.uff -O NMS -p ./config.py

2. Execute it with jetson_inference:

./detectnet-camera --network=ssd-mobilenet-v2 --model=./ssd_mobilenet_v2.uff --class_labels=./networks/ssd_coco_labels.txt

Thanks.

Sorry, your code may have some errors.

If you use

--network=ssd-mobilenet-v2

then detectnet-camera will use the UFF file downloaded from the server, not the file you generated.

Check with detectnet-camera --help.

Hello dusty_nv, could you help me?

Sorry for the delay - to load a custom UFF model with detectNet, you would need to either replace the files in /data/networks/SSD-Mobilenet-v2 or modify the code to change the paths. Loading a custom UFF model from the command line isn’t supported by detectnet-console/detectnet-camera, because UFF requires additional parameters.

See here in the code for how UFF model is loaded by detectNet:
https://github.com/dusty-nv/jetson-inference/blob/87b5a8814a60e860709b35f3f774907c249db081/c/detectNet.cpp#L258

So you could either change the paths for SSD-Mobilenet-v2 here, add a new enum for your own model, or simply replace the SSD-Mobilenet-v2 files on disk.
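The "replace the files on disk" option above can be sketched as follows. The directory layout and the stock filename ssd_mobilenet_v2_coco.uff are assumptions (check detectNet.cpp for the exact names your jetson-inference build expects); a scratch directory stands in for data/networks/ here so the commands are runnable as-is:

```shell
# Stand-in layout: a scratch dir mimics data/networks/SSD-Mobilenet-v2.
NET_DIR=$(mktemp -d)/SSD-Mobilenet-v2
mkdir -p "$NET_DIR"
echo stock  > "$NET_DIR/ssd_mobilenet_v2_coco.uff"   # pretend stock UFF
echo custom > ssd_mobilenet_v2.uff                   # pretend your generated UFF

# The actual swap: back up the stock UFF, drop in your own, and
# delete any cached TensorRT engine so it gets rebuilt on next run.
mv "$NET_DIR/ssd_mobilenet_v2_coco.uff" "$NET_DIR/ssd_mobilenet_v2_coco.uff.bak"
cp ssd_mobilenet_v2.uff "$NET_DIR/ssd_mobilenet_v2_coco.uff"
rm -f "$NET_DIR"/*.engine

cat "$NET_DIR/ssd_mobilenet_v2_coco.uff"   # prints: custom
```

Deleting the cached .engine file matters: detectNet caches the built engine next to the model, so a stale cache would keep serving the old network even after the UFF is replaced.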

Hi, I’m trying to achieve the same thing, but I couldn’t manage to convert my graph to .uff format. What I have is a frozen_inference_graph.pb that was trained to detect the Jetson Nano board :). I also have a saved_model.pb, so I’m not sure which .pb file I should use for conversion. This frozen graph was created using the ssd_inception_v2_coco_2018_01_28 version of the model. Research tells me that I can convert a .pb file to .uff using a conversion script, which requires the model to convert (frozen_graph.pb) and its output node names, which I couldn’t find. Can you help me out with this? I hope I gave enough information, and since I’m kind of new to this, please be as explicit as possible. I’d really appreciate it! Thanks!

Hi canozcivelek,

Please open a new topic for your issue. Thanks