Frozen_inference_graph.pb to .uff file

Hi,

Please provide detailed instructions and the necessary files to convert my frozen graph (frozen_inference_graph.pb) to a .uff file.

Can I do this conversion on the Jetson Nano?

And can I use this .uff file as the network model for detectnet-camera.py for object detection?

Regards,
Shankar

Hi,

I am still waiting on this… could you please reply?

Meanwhile, I am referring to the link below:
https://devtalk.nvidia.com/default/topic/1056054/jetson-tx2/how-to-retrain-ssd_inception_v2_coco_2017_11_17-from-the-tensorrt-samples/1

I tried converting frozen_inference_graph.pb to a .uff file, but I am getting this error:

AttributeError: module 'model ssd_mobilenet_v1_coco_2018_01_28' has no attribute 'preprocess'

Please let me know what this error means.

Regards,
Shankar

Hi,

The conversion can be applied directly on the Jetson Nano.
You can follow this sample for the procedure: /usr/src/tensorrt/samples/sampleUffSSD/

jetson-inference does support the ssd_mobilenet series of models:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md

If your model uses a similar architecture, you can try replacing the .uff file in jetson-inference directly.
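
For reference, detectnet-camera.py drives the network through the jetson.inference Python bindings, roughly as in the minimal sketch below (the built-in ssd-mobilenet-v2 is shown; a custom .uff would have to expose the same input and output layout to be a drop-in replacement):

# Minimal sketch of the detectnet-camera.py flow (assumes the jetson-inference
# Python bindings are built and a CSI or V4L2 camera is attached).
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(1280, 720, "0")   # "0" = CSI camera; use "/dev/video0" for USB
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("{:d} objects detected".format(len(detections)))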

Thanks.

Hi AastaLLL,

I followed the link below and, using the command "python3.6 convert_to_uff.py --input-file /home/jetbot/models/research/object_detection/inference_graph/frozen_inference_graph.pb -O Postprocessor -p config.py", got the .uff file. But this .uff file is only 440 bytes, whereas the frozen_inference_graph.pb file is 52.4 MB. The config file I used is attached below. Please let me know what is wrong here.

https://devtalk.nvidia.com/default/topic/1056054/jetson-tx2/how-to-retrain-ssd_inception_v2_coco_2017_11_17-from-the-tensorrt-samples/2

config.py:

import graphsurgeon as gs
import tensorflow as tf

path = '/home/jetbot/models/research/object_detection/inference_graph/frozen_inference_graph.pb'
TRTbin = 'TRT_ssd_mobilenet_v1_coco_2018_01_28.bin'
output_name = ['Postprocessor']
dims = [3, 300, 300]
layout = 7

# Input placeholder that replaces the TensorFlow preprocessing subgraph
Input = gs.create_plugin_node(
    name="Input",
    op="Placeholder",
    shape=[1, 3, 300, 300]
)

# GridAnchor plugin that replaces MultipleGridAnchorGenerator
PriorBox = gs.create_plugin_node(
    name="MultipleGridAnchorGenerator",
    op="GridAnchor_TRT",
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1],
    numLayers=6
)

# NMS plugin that replaces the TensorFlow Postprocessor subgraph
Postprocessor = gs.create_plugin_node(
    name="Postprocessor",
    op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=7,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID"
)

concat_priorbox = gs.create_plugin_node(
    "concat_priorbox",
    op="ConcatV2",
    axis=2
)

concat_box_loc = gs.create_plugin_node(
    "concat_box_loc",
    op="FlattenConcat_TRT",
)

concat_box_conf = gs.create_plugin_node(
    "concat_box_conf",
    op="FlattenConcat_TRT",
)

# Map TensorFlow namespaces/nodes to the plugin nodes defined above
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": Postprocessor,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}


def preprocess(graph):
    # Remove Assert nodes and forward Identity nodes before collapsing namespaces
    all_assert_nodes = graph.find_nodes_by_op("Assert")
    graph.remove(all_assert_nodes, remove_exclusive_dependencies=True)
    all_identity_nodes = graph.find_nodes_by_op("Identity")
    graph.forward_inputs(all_identity_nodes)
    print(" Operation done ")
    graph.collapse_namespaces(namespace_plugin_map)
    graph.remove(graph.graph_outputs, remove_exclusive_dependencies=False)
    #graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")
    #graph.find_nodes_by_name("Input")[0].input.remove("image_tensor:0")

Hi AastaLLL,

I have shared my frozen_inference_graph.pb, config.py, and the other required files at the link below.
Please take a look and, if possible, try the conversion with these files.

https://drive.google.com/drive/folders/18YYUdcKfBvJVqaVSYvn7801g_JYrPc7k?usp=sharing

Regards,
Shankar

Hi AastaLLL,

I am still waiting for a solution to this; could you please reply soon?

Thank you,

Regards,
Shankar

Hi,

Sorry for keeping you waiting.

I tried converting your model to uff with this command, and it works:

$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o topic_1064501.uff -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py

However, the number of layers looks strange to me.
Are you using an ssd-mobilenet-related model? May I know the name of your output layer?
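
If you are not sure, a short script like the one below (plain TensorFlow 1.x; the .pb path is just an example) prints every node name and op in the frozen graph, which makes it easy to find the real output node:

# Sketch: list all node names/ops in a frozen TensorFlow 1.x graph.
import tensorflow as tf

pb_path = 'frozen_inference_graph.pb'   # adjust to your model path

graph_def = tf.GraphDef()
with tf.gfile.GFile(pb_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)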

Thanks.

Hi AastaLLL,

Yes, I am using the ssd-mobilenet-v2 model.

When you converted my model to uff, was the generated topic_1064501.uff file only around 478 bytes?
Is this correct?

How did you verify the converted topic_1064501.uff file?

Hi,

The uff parser converts the graph from the input to the given output node using a shortest-path traversal.
The generated uff file doesn't include any nodes other than NMS, so something looks wrong.
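
As a quick check, the conversion can also be driven from Python with the uff package (a sketch, assuming the sampleUffSSD config.py and a local copy of your frozen graph; the paths and output filename are examples); the size of the returned buffer makes a truncated graph easy to spot:

# Sketch: run the uff conversion programmatically and inspect the result size.
import uff

uff_buffer = uff.from_tensorflow_frozen_model(
    'frozen_inference_graph.pb',      # example path to your frozen graph
    output_nodes=['NMS'],             # must match the plugin node name created in config.py
    preprocessor='/usr/src/tensorrt/samples/sampleUffSSD/config.py')

# A complete ssd-mobilenet graph serializes to tens of MB; a few hundred bytes
# means the parser only reached a trivial subgraph on the way to the output node.
print(len(uff_buffer), 'bytes')

with open('model.uff', 'wb') as f:    # example output filename
    f.write(uff_buffer)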

May I know the network architecture of your model?
Did you apply any changes to the ssd-mobilenet-v2 model?

Based on your graph, the model's node names have changed.
Could you update the corresponding layer names in config.py first?

Thanks.

Hello, I am new to working with the Jetson Nano! I want to run inference with a frozen .pb model, trained with ResNet-101 and Faster R-CNN, on the Jetson Nano. As I understand it, I first have to convert it to uff format, but when I run this command

$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o topic_1064501.uff -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py

I get this error: ModuleNotFoundError: No module named 'tensorflow'
I would like to know how to solve this, and also where exactly I should put the frozen .pb file that I want to convert to uff. Right now it is in my Downloads folder.

Hi barzegar.n,

Please open a new topic if this is still an issue that needs support. Thanks.