I have a TensorFlow model trained in Python on a Windows machine.
I plan to convert it to UFF and run inference with TensorRT to optimize execution time.
I have read in other posts that to use the Python samples and the UFF converter, you need to install a DEB package.
Does that mean TensorRT inference and the following execution/deployment can only be done from Linux?
Or how can I install the UFF parser and TensorRT for Python in a Windows environment?
May I ask if you have used this library? In my tests, I found that the pure inference time of this library was much slower than TensorRT C++, and even much slower than the original PyTorch model. However, the predicted results were correct, which left me very confused.
Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so please try the ONNX parser instead.
Please check the link below for details.
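For what it's worth, a common TensorFlow-to-TensorRT workflow with the ONNX route looks roughly like the sketch below. This assumes you have a TensorFlow SavedModel directory (the `./saved_model` path and `model.onnx`/`model.engine` names are placeholders), the `tf2onnx` package installed, and `trtexec` available from your TensorRT installation; adjust the opset to one your TensorRT version supports.

```shell
# Sketch only: paths and opset are assumptions, adapt to your setup.

# 1. Export the TensorFlow SavedModel to ONNX using tf2onnx.
python -m tf2onnx.convert \
    --saved-model ./saved_model \
    --output model.onnx \
    --opset 13

# 2. Build a TensorRT engine from the ONNX file with trtexec
#    (ships with TensorRT; also reports inference timing).
trtexec --onnx=model.onnx --saveEngine=model.engine
```

On Windows, TensorRT is distributed as a ZIP package rather than a DEB, so `trtexec` and the ONNX parser are usable there as well; only the DEB/RPM packaging is Linux-specific.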