TensorFlow 2.0 model TensorRT inference on Windows machines

As there is no Python API support for TensorRT on Windows machines, could you suggest an approach to run TensorRT inference on a Windows machine in the following environment? I have also found that UFF support is not available in TensorFlow 2.0. Is converting to ONNX the only option?

OS: Windows 10
Framework: TensorFlow 2.0 (with GPU support)
NVIDIA Driver version: 441.41

Hi,

You can try tf2onnx + the ONNX parser as an alternative. Any layer that is not supported needs to be replaced by a custom plugin.

https://github.com/onnx/onnx-tensorrt/blob/master/operators.md
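After converting the model (for example with `python -m tf2onnx.convert --saved-model <dir> --output model.onnx`), a first step is to check whether the ONNX parser accepts it. The following is a minimal sketch, assuming TensorRT 7.x on Windows with the NvInfer/NvOnnxParser headers and libraries available; the model path is a placeholder:

```cpp
// Sketch: load an ONNX model with TensorRT's ONNX parser and report any
// operators it cannot handle (those are the ones that need custom plugins).
// Assumes TensorRT 7.x; resource cleanup uses the TRT 7 destroy() API.
#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
} gLogger;

int main() {
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);
    auto parser  = nvonnxparser::createParser(*network, gLogger);

    // "model.onnx" is a placeholder path for the converted TF model.
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        // Each parser error typically names the unsupported operator.
        for (int i = 0; i < parser->getNbErrors(); ++i)
            std::cerr << parser->getError(i)->desc() << "\n";
        return 1;
    }
    std::cout << "ONNX model parsed successfully\n";

    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

The `trtexec` tool that ships in the TensorRT zip for Windows can perform the same check from the command line (`trtexec --onnx=model.onnx`) without writing any code.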

Thanks

Hi,
Thanks for the reply. For inference, in order to use the TensorRT ONNX parser I need to import tensorrt. When I run `import tensorrt` with TensorFlow 2.0, a `ModuleNotFoundError: No module named 'tensorrt'` error occurs.
Is it because of the UFF support issue in TensorFlow 2.0 with 'GraphDef'? Is there any other alternative, or do I need to use the TensorRT C++ API on Windows?

Thanks

Hi,

Could you please check if TensorRT is installed in your system?

Please refer to the link below for Windows installation:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-install-guide/index.html#installing-zip

Thanks

Hi,

Thanks for the reply. TensorRT is already installed on my system. Thanks for the links you provided for the conversion; I am now able to convert models to ONNX. Since Python API support is not available on Windows machines, I guess that is why `import tensorrt` failed. Could you please confirm whether this assumption is correct? Can I go with the TensorRT C++ API on Windows machines?

Thanks for your valuable support

Hi,

Yes, the Python API is not supported on Windows systems. You need to use the C++ API.
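As a minimal sketch of the C++ path (assuming TensorRT 7.x, a network already populated by the ONNX parser, and a single input/output binding; buffer sizes and binding order are illustrative assumptions, not taken from a real model):

```cpp
// Sketch: build an engine from a parsed ONNX network and run one inference.
// Continues from an INetworkDefinition populated by nvonnxparser.
// Error checking on CUDA calls is omitted for brevity.
#include <vector>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

void runInference(nvinfer1::IBuilder* builder,
                  nvinfer1::INetworkDefinition* network,
                  const std::vector<float>& input,
                  std::vector<float>& output) {
    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 28);  // 256 MiB scratch space
    auto engine  = builder->buildEngineWithConfig(*network, *config);
    auto context = engine->createExecutionContext();

    // One input binding (index 0) and one output binding (index 1) assumed.
    void* bindings[2];
    cudaMalloc(&bindings[0], input.size() * sizeof(float));
    cudaMalloc(&bindings[1], output.size() * sizeof(float));
    cudaMemcpy(bindings[0], input.data(), input.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    context->executeV2(bindings);  // synchronous execution

    cudaMemcpy(output.data(), bindings[1], output.size() * sizeof(float),
               cudaMemcpyDeviceToHost);

    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
    context->destroy();
    engine->destroy();
    config->destroy();
}
```

For deployment you would normally serialize the built engine once (`engine->serialize()`) and reload the plan file at startup, since engine building is the slow step.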

Please refer to the link below for the support matrix:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-support-matrix/index.html#platform-matrix

Thanks

Thanks for the valuable support