Since there is no Python API support for TensorRT on Windows machines, could you suggest an approach for running TensorRT inference on Windows in the following environment? I have also found that UFF support is not available in TensorFlow 2.0. Is converting to ONNX the only option?
Hi,
Thanks for the reply. For inference, using the TensorRT ONNX parser requires importing tensorrt. When running "import tensorrt" alongside TensorFlow 2.0, I get ModuleNotFoundError: No module named 'tensorrt'.
Is this related to the UFF support issue in TensorFlow 2.0 caused by 'GraphDef'? Is there any other alternative, or do I need to use the TensorRT C++ API on Windows?
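For what it's worth, a minimal diagnostic sketch to check whether the tensorrt Python bindings are even importable in a given environment before attempting to use the ONNX parser (this only probes the module search path, it does not load TensorRT):

```python
import importlib.util

# Probe whether the "tensorrt" Python bindings are visible to this
# interpreter; False here explains the ModuleNotFoundError above.
trt_available = importlib.util.find_spec("tensorrt") is not None
print("tensorrt importable:", trt_available)
```

If this prints False, the failure is an environment/packaging issue rather than anything related to TensorFlow's GraphDef handling.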
Thanks for the reply. TensorRT is already installed on my system, and thanks to the links you provided for the conversion, I am now able to convert my models to ONNX. My guess is that "import tensorrt" failed because the Python API is not supported on Windows machines. Could you please confirm whether this assumption is correct? And could I proceed with the TensorRT C++ API on Windows machines?
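In case it helps others, here is a rough sketch of what loading the converted ONNX model through the TensorRT C++ API might look like on Windows. This is a non-authoritative outline (the file name "model.onnx" is a placeholder, and the exact interface signatures vary slightly between TensorRT versions), not a complete inference program:

```cpp
// Sketch: build a TensorRT engine from an ONNX file via the C++ API,
// which is available on Windows where the Python bindings are not.
#include <cstdint>
#include <iostream>
#include <memory>
#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger required by the TensorRT builder and parser.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;

    auto builder = std::unique_ptr<nvinfer1::IBuilder>(
        nvinfer1::createInferBuilder(logger));

    // ONNX models require an explicit-batch network definition.
    const uint32_t explicitBatch = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(
        builder->createNetworkV2(explicitBatch));

    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx",  // placeholder path
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(
        builder->createBuilderConfig());
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        builder->buildEngineWithConfig(*network, *config));
    // ...create an IExecutionContext from the engine, copy inputs to the
    // GPU, and enqueue inference from here...
    return engine ? 0 : 1;
}
```

Compiling this requires linking against nvinfer and nvonnxparser from the TensorRT SDK, so it is only a starting point to verify against the official samples.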