How to run inference on a network with TensorRT?

Hi,

As far as I know, there are two ways to run inference on a TensorFlow network with TensorRT:

  1. Build TensorFlow with TensorRT integration (TF-TRT). TensorRT re-saves the TensorFlow model as a lighter, optimized model, and you then use TensorFlow itself to run inference on the re-saved network.
  2. Convert the TensorFlow model to UFF, then use TensorRT to load the UFF file and run inference.

My current task is to use TensorRT to accelerate the network. Which approach do you recommend, and why?

Best regards,
thanks.

Hi,

TensorFlow models can be deployed in the following ways:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#working_tf

You can use any of these approaches to accelerate your network.

Thanks