The goal:

- convert a retrained ssd-inception-v2 TensorFlow model to a TensorRT model
- conversion and inference are done on the TX2
- training is done on a laptop

I took the “ssd-inception-v2” model from the TensorFlow model zoo, retrained it, and now want to convert it to TRT.

The problem is that the converted model is no faster than the original, since the input dimensions are left undefined (?, ?, ?, 3).

I tried to set the input to a fixed size using the code below, but I get the error shown below.

Help on what to change, and how, is appreciated.

The error:

```
ValueError: node 'image_tensor' in input_map does not exist in graph (input_map entry: image_tensor:0->image_tensor:0)
```
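While narrowing this down, I found the same error reproduces with any toy graph whenever the replacement placeholder is created in a different graph from the one `import_graph_def` imports into. A minimal stand-in (the tiny graph here is hypothetical, not the real SSD graph; `tf.compat.v1` is used so it runs on TF 1.13+ and TF 2.x):

```python
import tensorflow as tf

tf1 = tf.compat.v1  # v1 graph API, available in TF 1.13+ and TF 2.x

# Toy stand-in for the frozen graph: 'image_tensor' feeding a cast
# (hypothetical, not the real SSD graph).
with tf1.Graph().as_default() as source_graph:
    image = tf1.placeholder(tf.uint8, shape=[None, None, None, 3],
                            name='image_tensor')
    tf1.cast(image, tf.float32, name='preprocess')
graph_def = source_graph.as_graph_def()

# The replacement placeholder lives in one graph...
with tf1.Graph().as_default():
    new_input = tf1.placeholder(tf.uint8, shape=[1, 320, 320, 3],
                                name='image_tensor')

# ...but import_graph_def targets a different graph, so the input_map
# value cannot be resolved there and a ValueError is raised.
caught = None
try:
    with tf1.Graph().as_default():
        tf1.import_graph_def(graph_def,
                             input_map={'image_tensor:0': new_input})
except ValueError as err:
    caught = err
print(caught)
```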

The code:

```
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

with tf.gfile.GFile(frozen_graph_filename, "rb") as file_handle:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(file_handle.read())

new_input = tf.placeholder(dtype=tf.uint8, shape=[1, 320, 320, 3], name='image_tensor')

with tf.Graph().as_default() as frozen_graph:
    # tf.import_graph_def(graph_def, name='')  # <-- this works as expected
    tf.import_graph_def(graph_def, input_map={'image_tensor:0': new_input})

    # convert to TRT:
    model_out = ['detection_classes', 'num_detections', 'detection_boxes', 'detection_scores']
    trt_graph = trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=model_out,
        max_batch_size=1,
        max_workspace_size_bytes=max_work_space,
        precision_mode=tensorRT_precision,
        is_dynamic_op=False)
```
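For what it's worth, my current guess at a fix is to create the placeholder inside the `with tf.Graph().as_default()` block, so that `input_map` gets a tensor that actually lives in the target graph. A sketch of that pattern on a toy stand-in graph (hypothetical, not the real SSD graph; not yet verified on the TX2):

```python
import tensorflow as tf

tf1 = tf.compat.v1  # v1 graph API, available in TF 1.13+ and TF 2.x

# Toy stand-in for the frozen graph (hypothetical, not the real SSD graph).
with tf1.Graph().as_default() as source_graph:
    image = tf1.placeholder(tf.uint8, shape=[None, None, None, 3],
                            name='image_tensor')
    tf1.cast(image, tf.float32, name='preprocess')
graph_def = source_graph.as_graph_def()

with tf1.Graph().as_default() as frozen_graph:
    # Create the replacement placeholder INSIDE the target graph, so
    # input_map can resolve it during the import.
    new_input = tf1.placeholder(tf.uint8, shape=[1, 320, 320, 3],
                                name='fixed_image_tensor')
    tf1.import_graph_def(graph_def,
                         input_map={'image_tensor:0': new_input},
                         name='')

# Downstream ops now see the fixed shape instead of (?, ?, ?, 3).
print(frozen_graph.get_tensor_by_name('preprocess:0').shape)
```

If that is right, then I assume `trt.create_inference_graph` should also be given the remapped graph (`frozen_graph.as_graph_def()`) rather than the original `graph_def`, otherwise TRT still sees the undefined input shape.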