Trouble with sampleUffSSD project

Running convert-to-uff tensorflow --input-file frozen_inference_graph.pb -O NMS -p config.py with the model downloaded from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md, I get the following error:

Traceback (most recent call last):
  File "/home/user/.local/bin/convert-to-uff", line 11, in <module>
    sys.exit(main())
  File "/home/user/.local/lib/python3.5/site-packages/uff/bin/convert_to_uff.py", line 105, in main
    output_filename=args.output
  File "/home/user/.local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 153, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/home/user/.local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 74, in from_tensorflow
    quiet = False
  File "/home/user/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 77, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/user/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 59, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: 'image_tensor'

The 'image_tensor' node appears to be present in the graphdef at the beginning of the from_tensorflow call, but not after it is overwritten with graphdef = dynamic_graph.as_graph_def().
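
To see which node names are actually in the frozen graph before any preprocessing, a quick check like this works (TF 1.x sketch, assuming frozen_inference_graph.pb is in the current directory):

import tensorflow as tf

graph_def = tf.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# 'image_tensor' shows up in this list, but after the preprocessing in
# config.py collapses/removes nodes, some remaining node still refers to it
# as an input, which is what triggers the KeyError.
for node in graph_def.node:
    print(node.name)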

python3.5 -m pip show uff

Metadata-Version: 2.1
Name: uff
Version: 0.4.0

dpkg -l | grep TensorRT
ii graphsurgeon-tf 4.1.2-1+cuda9.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 4.1.2-1+cuda9.0 amd64 TensorRT development libraries and headers
ii libnvinfer4 4.1.2-1+cuda9.0 amd64 TensorRT runtime libraries
ii python3-libnvinfer 4.1.2-1+cuda9.0 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 4.1.2-1+cuda9.0 amd64 Python 3 development package for TensorRT
ii uff-converter-tf 4.1.2-1+cuda9.0 amd64 UFF converter for TensorRT package

Hi, I have the same problem. May I ask if you resolved this?

Looks like you need an older model: https://devtalk.nvidia.com/default/topic/1037256/tensorrt/sampleuffssd-conversion-fails-keyerror-image_tensor-/post/5269740/#5269740

Or change the preprocessing code in config.py like this:

# Input, PriorBox, NMS, concat_priorbox, concat_box_loc and concat_box_conf below
# are the plugin nodes already created near the top of the sample's config.py;
# leave those definitions as they are.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    # "ToFloat": Input,
    # "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

namespace_remove = {
    "ToFloat",
    "image_tensor",
    "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3",
}

def preprocess(dynamic_graph):
    # Remove the nodes that are either unneeded or break the UFF conversion.
    dynamic_graph.remove(dynamic_graph.find_nodes_by_path(namespace_remove), remove_exclusive_dependencies=False)

    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)

    # Bypass the Squeeze node feeding NMS to avoid "Assertion `isPlugin(layerName)' failed"
    squeeze_nodes = dynamic_graph.find_node_inputs_by_name(dynamic_graph.graph_outputs[0], 'Squeeze')
    dynamic_graph.forward_inputs(squeeze_nodes)
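
For reference, preprocess() above relies on the plugin nodes defined in the stock sampleUffSSD config.py. A rough sketch of those definitions is below; the parameter values are only placeholders, so copy the exact ones from the config.py shipped with your TensorRT samples:

import graphsurgeon as gs
import tensorflow as tf

# Placeholder that replaces the removed image_tensor/Preprocessor nodes.
Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])

# TensorRT plugin nodes; the attribute values here are illustrative only.
PriorBox = gs.create_plugin_node("GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])

NMS = gs.create_plugin_node("NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)

concat_priorbox = gs.create_node("concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT",
    dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT",
    dtype=tf.float32, axis=1, ignoreBatch=0)

With the modified preprocess(), re-running the same command (convert-to-uff tensorflow --input-file frozen_inference_graph.pb -O NMS -p config.py) should get past the KeyError.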