Optimising Mobilenet_v2 using TensorRT

Hi,

I am currently trying to optimise Mobilenet_v2 using TensorRT, using the following project as a starting point: https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification
I have edited the files to accommodate Mobilenet_v2, but I do not know what to assign to ‘output_names’ in the model_meta.py file. For mobilenet_v1 it is simply assigned ‘MobilenetV1/Logits/SpatialSqueeze’.
I tried simply changing it to ‘MobilenetV2/Logits/SpatialSqueeze’, but that is not right: when I run models_to_frozen_graphs.py in the terminal I get the error “AssertionError: MobilenetV2/Logits/SpatialSqueeze is not in graph”.
I do not know where to look to find how ‘output_names’ is determined for mobilenet_v2.
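For reference, one way to discover the correct output node is to load the frozen GraphDef and list its node names, then search for a likely logits/squeeze op. A rough sketch, assuming the TF 1.x-style API used by that project (the helper names, the keyword list, and the example node names are illustrative, not taken from the project):

```python
def candidate_outputs(node_names, keywords=("Squeeze", "Logits", "output")):
    """Return node names containing any of the given keywords.

    Useful for narrowing a long node list down to likely output ops.
    """
    return [n for n in node_names if any(k in n for k in keywords)]


def list_graph_nodes(pb_path):
    """Load a frozen GraphDef and return all node names (TF 1.x API).

    On TF 2.x, tf.compat.v1.GraphDef would be used instead.
    """
    import tensorflow as tf  # imported lazily; assumes TF 1.x
    graph_def = tf.GraphDef()
    with open(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return [node.name for node in graph_def.node]


# Usage (the path is hypothetical; use the .pb the project produces):
#   names = list_graph_nodes("data/frozen_graphs/mobilenet_v2.pb")
#   for name in candidate_outputs(names):
#       print(name)
```

Whatever name the filter turns up near the end of the graph (e.g. a squeeze op under the Logits scope) would be the value to try for ‘output_names’.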

Any help would be greatly appreciated.

Thanks!

I’ve seen a similar situation lately where the resolution was to add a label to the graph.

check https://devtalk.nvidia.com/default/topic/1032314/?comment=5251978

Does that help?

Hi ChrisGottbrath,

I had a look at the thread and tried the changes that sebastian8nelu suggested, but sadly it didn't work.

We created a new “Deep Learning Training and Inference” section in Devtalk to improve the experience for deep learning and accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth