As far as I know, there are two ways to run inference on a TensorFlow net with TensorRT:
- Build TensorFlow with TensorRT support; TensorRT rewrites the TensorFlow model into a lighter, optimized graph, and TensorFlow itself then runs inference on the rewritten net.
- Convert the TensorFlow model to UFF, then use the standalone TensorRT runtime to load the UFF file and run inference.
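For reference, here is a rough sketch of what the two paths look like in code. This is only a sketch under my assumptions: the first function assumes a TF 1.x build with the `tensorflow.contrib.tensorrt` module available, the second assumes the `uff` package that ships with the TensorRT Python bindings; the function names and the frozen-graph inputs are hypothetical placeholders.

```python
def build_trt_graph_with_tf(frozen_graph_def, output_names):
    """Way 1 (sketch): TF-TRT integration.

    Assumes TensorFlow was built/installed with TensorRT support.
    create_inference_graph rewrites supported subgraphs into TRT engine
    nodes and returns a new GraphDef that TensorFlow executes directly.
    """
    from tensorflow.contrib import tensorrt as trt  # TF 1.x contrib API
    return trt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=output_names,
        max_batch_size=1,
        precision_mode="FP16",  # or "FP32"/"INT8" depending on the GPU
    )


def convert_pb_to_uff(frozen_pb_path, output_names, uff_path):
    """Way 2 (sketch): convert the frozen graph to UFF.

    The resulting .uff file is then parsed and run by the TensorRT
    runtime alone, with no TensorFlow dependency at inference time.
    """
    import uff  # ships with the TensorRT Python bindings
    uff.from_tensorflow_frozen_model(
        frozen_pb_path,
        output_nodes=output_names,
        output_filename=uff_path,
    )
```

(Imports are deferred inside the functions since each path needs a different install.)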
My current task is to accelerate this net with TensorRT. Which way would you recommend, and why?