Error running tensorrt_server in NVIDIA docker image with ResNet-50 model

Hi folks,

I am trying to run the tensorrt_server program in NVIDIA’s nvcr.io/nvidia/tensorrt:17.12 Docker image with a ResNet-50 model. That image ships with an example ResNet-152 frozen model and a shell script under the “/workspace/tensorrt_server” directory. I am trying to replace these with a frozen copy of the ResNet-50 v1 model from the TensorFlow models repository: http://download.tensorflow.org/models/official/resnet_v1_imagenet_checkpoint.tar.gz

The exact .pb file I am using may be found here: https://fpgastorage.blob.core.windows.net/filecatalog/sample10k/temp/resnet_v1_50_frozen.pb

I modified the sample shell script that invokes tensorrt_server in what I believe is the appropriate way; the modified script is here: https://fpgastorage.blob.core.windows.net/filecatalog/sample10k/temp/tensorflow_resnetv1_50

Unfortunately, running this shell script produces an error during conversion of the TensorFlow model to UFF. The full error message is here: https://fpgastorage.blob.core.windows.net/filecatalog/sample10k/temp/error.txt

The operative part of the stack trace seems to be:

Traceback (most recent call last):
  File "/opt/uff/uff/bin/convert_to_uff.py", line 109, in <module>
    main()
  File "/opt/uff/uff/bin/convert_to_uff.py", line 104, in main
    output_filename=args.output
  File "/opt/uff/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/opt/uff/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 157, in convert_tf2uff_field
    'type': 'dtype', 'list': 'list'}
KeyError: 'shape'
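In case it helps anyone diagnose this, my reading of the trace (a guess, since I haven't dug through the actual uff source) is that convert_tf2uff_field maps each TensorFlow AttrValue field code to a UFF field kind via a small dict, and 'shape' has no entry, so any node carrying a shape attribute (presumably the input Placeholder) blows up. A minimal sketch of that failure mode, with the dict contents assumed from the traceback:

```python
# Sketch of the mapping shown at converter.py line 157 in the traceback;
# the exact contents are an assumption, not copied from the real uff source.
code_to_kind = {
    's': 'str', 'i': 'int', 'f': 'float', 'b': 'bool',
    'type': 'dtype', 'list': 'list',
}

def convert_tf2uff_field(code):
    # An attribute whose AttrValue field is 'shape' has no entry in the
    # table, so the lookup raises KeyError: 'shape' -- matching the trace.
    return code_to_kind[code]

try:
    convert_tf2uff_field('shape')
except KeyError as e:
    print('unsupported attr field:', e)  # -> unsupported attr field: 'shape'
```

If that reading is right, the fix would be to keep any node with a shape attribute out of the converter's path (for example, by registering the input explicitly during conversion), but I haven't confirmed that.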

Would anyone here know what is going wrong? Should I file this as an issue against the TensorFlow repository, or does the “blame” lie with TensorRT? Is there a known-working way to load a trained ResNet-50 model in TensorRT? Thanks very much for your help!

We created a new “Deep Learning Training and Inference” section on Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth