Error running tensorrt_server in NVIDIA docker image with ResNet-50 model

Hi folks,

I am trying to run the tensorrt_server program in NVIDIA’s Docker image with a ResNet-50 model. That image includes an example frozen ResNet-152 model and a shell script under the “/workspace/tensorrt_server” directory. I am trying to replace these with a frozen copy of the ResNet-50 v1 model from the TensorFlow models repository:

The exact .pb file I am using may be found here:

I modified the sample shell script used to invoke tensorrt_server in what I believe is an appropriate way; you may find that file here:

Unfortunately, when I run this shell script to invoke tensorrt_server, I get an error during the conversion of the TensorFlow model to UFF. The full error message is here:

The operative part of the stack trace seems to be:

Traceback (most recent call last):
  File "/opt/uff/uff/bin/", line 109, in <module>
  File "/opt/uff/uff/bin/", line 104, in main
  File "/opt/uff/uff/converters/tensorflow/", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/opt/uff/uff/converters/tensorflow/", line 75, in from_tensorflow
  File "/opt/uff/uff/converters/tensorflow/", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/opt/uff/uff/converters/tensorflow/", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/opt/uff/uff/converters/tensorflow/", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/opt/uff/uff/converters/tensorflow/", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/opt/uff/uff/converters/tensorflow/", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/opt/uff/uff/converters/tensorflow/", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/opt/uff/uff/converters/tensorflow/", line 157, in convert_tf2uff_field
    'type': 'dtype', 'list': 'list'}
KeyError: 'shape'
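Reading the bottom frame of that traceback, the converter appears to map each TensorFlow attribute's value type to a UFF field via a fixed dictionary (the `'type': 'dtype', 'list': 'list'` fragment at line 157), and an attribute of type `shape` simply has no entry, hence the `KeyError`. Here is a minimal plain-Python sketch of that failure mode; the mapping's other keys and the function shape are my assumptions for illustration, not the converter's actual source:

```python
# Illustrative sketch of the lookup that fails in the UFF converter's
# convert_tf2uff_field. Only the 'type' and 'list' entries are visible in
# the traceback; the other keys here are assumed for the example.
TF2UFF_FIELD_MAP = {
    'type': 'dtype',
    'list': 'list',
    # ...other supported attribute types...
    # note: no entry for 'shape'
}

def convert_tf2uff_field(code, val):
    # A TensorFlow node attribute whose value type is 'shape'
    # (a TensorShapeProto) has no mapping, so the lookup raises.
    return TF2UFF_FIELD_MAP[code], val

try:
    convert_tf2uff_field('shape', None)
except KeyError as e:
    print(f"KeyError: {e}")  # prints: KeyError: 'shape'
```

If this reading is right, the error means some node in the ResNet-50 v1 graph carries a `shape`-typed attribute that this version of the UFF converter does not handle, which would point at the converter rather than the model itself.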

Would anyone here know what is going wrong? Is it something I should file as an issue against the TensorFlow repository, or does the “blame” lie with TensorRT? Is there a known-working way to load a trained ResNet-50 model in TensorRT? Thanks very much for your help!

We created a new “Deep Learning Training and Inference” section in Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.