Linux distro and version: Ubuntu 16.04

GPU type: P4

NVIDIA driver version:

CUDA version: 10

CUDNN version: 7.4.1

Python version [if using python]: 3.5

Tensorflow version: 1.12

TensorRT version: 5.02

If Jetson, OS, hw versions: NA

Describe the problem

While building a TensorRT model with TF-TRT 1.12, the exported model's shape info is lost. The exported frozen model can run on TRTIS, but if I convert the frozen model to a SavedModel, it causes errors on both TRTIS and TF-Serving.

https://github.com/tensorflow/tensorrt.git

Just run tensorrt/tftrt/examples/image-classification/image_classification.py with the desired precision parameter, as in the following command, and take the frozen model from the graphs directory.

python image_classification.py --model resnet_v1_50 --data_dir /data/ImageNetVal/ --precision fp32 --batch_size=1 --use_trt

The one frozen from TensorRT:

Tensor("input:0", shape=(?, 224, 224, 3), dtype=float32)

Tensor("logits:0", dtype=float32)

And the native one:

Tensor("input:0", shape=(?, 224, 224, 3), dtype=float32)

Tensor("logits:0", shape=(?, 1001), dtype=float32)

The difference is that the shape of logits in the TensorRT graph has been lost.

The script is as follows:

import tensorflow as tf
from tensorflow.contrib import tensorrt as trt
import os
import shutil

graph_pb = 'frozen_graph_resnet_v1_50_1_fp32_1.pb'

with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    for n in graph_def.node:
        if n.name == "logits":
            print(n)
        if n.name == "input":
            print(n)