Trying to get the C++ quickstart example running. Following the instructions here, the first step is to build a working TensorRT engine from an ONNX file. Converting the model with trtexec fails.
I am trying to use an NVIDIA product out of the box and it doesn't work. I would expect the NVIDIA developers, not me, to be the ones checking the list of Supported ONNX Operators.
If the model fcn-resnet101.onnx is not supported by TensorRT, remove it from the quickstart example and provide something that works.
Please provide instructions on how to get the TensorRT C++ quickstart example working.
We recommend trying the latest TensorRT version, 8.4 EA.
If you still face this issue, please file it on Issues · NVIDIA/TensorRT · GitHub to get better help.
This was bad advice. On the one hand, the problem persists (I already posted it on the GitHub issue tracker, with no reply yet…); on the other hand, all the TensorRT engines created with tao-converter STOPPED WORKING, and there is no version of tao-converter compatible with TensorRT 8.4…
To fix this problem, increase the workspace size with the --workspace=4096 option; the default workspace is not large enough for TensorRT 8.x.
Here is the corrected command:
trtexec --onnx=fcn-resnet101.onnx --fp16 --workspace=4096 --minShapes=input:1x3x256x256 --optShapes=input:1x3x1026x1282 --maxShapes=input:1x3x1440x2560 --buildOnly --saveEngine=fcn-resnet101.engine
Thanks to jasxu-nvidia's comments.
Reference: Quick Start, Unable to prepare engine · Issue #1965 · NVIDIA/TensorRT · GitHub