TensorFlow + TensorRT: ConvertGraphDefToTensorRT usage in C++

Can someone give an example of how to use the ConvertGraphDefToTensorRT function?

Hi,

Is this a duplicate issue of topic 1049997?
https://devtalk.nvidia.com/default/topic/1049997/jetson-agx-xavier/tensorflow-r1-9-tensorrt-compilation-with-jetpack-4-2/

Or have you already fixed the installation issue?
Thanks.

While linking against tensorflow_cc.so, my C++ application is not able to locate the ConvertGraphDefToTensorRT function.

Which library will it be in?

I have TensorFlow working without TensorRT.

I was not sure where to start, so I asked the question in two different ways.

Hi,

There are two possible ways:

TF-TRT: https://github.com/NVIDIA-AI-IOT/tf_trt_models
Pure TensorRT: https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification

Thanks.

Dear AastaLLL,

TF-TRT: https://github.com/NVIDIA-AI-IOT/tf_trt_models
Pure TensorRT: https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification

Both of these paths show how to use Python for the conversion. I am looking for a C/C++ API;
that's why I am asking about ConvertGraphDefToTensorRT.

Do they contain an example C++ program to convert a TF graph to TensorRT?

Hi,

We have both Python and C++ implementations in TensorRT, but only a Python-based TensorFlow parser.
You will need to convert the model into a .uff file with the Python interface first.

Please check the links shared above for the detailed steps for converting the model.
Thanks.
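For reference, a minimal sketch of that offline conversion step, using the uff package that ships with TensorRT. The filenames and the output node name below are placeholders, not values from this thread:

```python
# Minimal sketch: convert a frozen TensorFlow graph (.pb) to UFF, offline.
# "frozen_inference_graph.pb", "logits" and "model.uff" are placeholder names.
import uff

uff.from_tensorflow_frozen_model(
    frozen_file="frozen_inference_graph.pb",  # your frozen TensorFlow graph
    output_nodes=["logits"],                  # name(s) of your graph's output node(s)
    output_filename="model.uff",
    text=False,
)
```

This runs once on the training/host side; only the resulting .uff file needs to ship with the inference application.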

Dear AastaLL,

Can this Python conversion happen offline, or does it need to happen on the fly every time we load the model for inference?

Regards,

Hi,

You only need to do the conversion once.
After that, you can always create the TensorRT engine from the .uff model directly.
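As a sketch of that second step (assuming the TensorRT 5.x-era Python API available on JetPack, with placeholder tensor names and shapes), you can parse the .uff once, build the engine, and serialize it to a plan file:

```python
# Sketch: build a TensorRT engine from a .uff model and serialize it.
# "model.uff", "input", "logits" and the input shape are placeholders.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28            # 256 MB of build workspace
    parser.register_input("input", (3, 224, 224))   # placeholder name / CHW shape
    parser.register_output("logits")                # placeholder output node name
    parser.parse("model.uff", network)
    engine = builder.build_cuda_engine(network)

# Serialize once; load the plan file directly at inference time.
with open("model.plan", "wb") as f:
    f.write(engine.serialize())
```

A C++ application can then deserialize the plan with the TensorRT C++ runtime (nvinfer1::createInferRuntime and IRuntime::deserializeCudaEngine), so no Python is needed at inference time.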

Thanks.

Dear AastaLLL,

Thanks for the updates.

So the steps can be:

  1. Train a TensorFlow model.
  2. Convert the model to .uff using the method you suggest.
  3. Use the new .uff model instead of the TensorFlow model for inference.

Can the implementation of step 3 be similar to what it was when we used the TensorFlow model for inference?
Does the GPU automatically understand the .uff model and use TensorRT?

Dear AastaLLL,

I am using a TensorFlow object detection model with four different output node names.
How does the conversion work in that case?
Also, I am working with a Jetson Xavier.

Hi,

Please note that if not all of the layers are supported by TensorRT, you will need to create a plugin implementation for them.
AFAIK, object detection models usually contain some unsupported layers.
It's recommended to check our support matrix first:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html

To answer your question, you can pass the output nodes as a list.
For example,

uff_model = uff.from_tensorflow_frozen_model(
            frozen_file=net_meta['frozen_graph_filename'],
            output_nodes=[net_meta['output_names1'], net_meta['output_names2']],
            output_filename=net_meta['uff_filename'],
            text=False
        )