I have recently been experiencing a `[TRT] [E] 1: Unexpected exception` error (see attached trtinference_fp16_failure.txt) when running a script (attached below) that performs TensorRT inference. I created the engine using trtexec (see below). When I create the engine without the `--fp16` flag, I get no exception and everything works.
I am running on Jetpack 4.6.2 with TensorRT 8.2.1.8.
Any ideas or recommendations?
Thanks!
Steps to reproduce the issue: Pull the Docker image: docker pull allu1234/pysot-xavier-torch19:1.0
I could run the model with trtexec directly, so I reworked my Python code for allocating the buffers and executing inference based on your example, and now it works. So the issue was in my Python buffer-allocation/inference code, not in the engine itself.
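For anyone hitting the same exception: below is a rough sketch of the buffer-allocation and inference pattern used in the TensorRT Python samples (the pycuda-based approach), which is what I based my fix on. This is an illustrative sketch, not my exact script; names like `allocate_buffers` and `do_inference` follow the sample code, and it assumes a pycuda environment with a CUDA context initialized (e.g. via `import pycuda.autoinit`).

```python
def volume(shape):
    """Number of elements in a tensor of the given shape (pure Python helper)."""
    n = 1
    for d in shape:
        n *= d
    return n

def allocate_buffers(engine):
    """Allocate page-locked host buffers and matching device buffers
    for every binding of a TensorRT engine (sketch, TensorRT 8.x API)."""
    # Imported lazily so the pure-Python helper above works without a GPU.
    import pycuda.driver as cuda  # requires `import pycuda.autoinit` beforehand
    import tensorrt as trt

    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:  # iterating an engine yields binding names
        size = volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)  # page-locked host buffer
        device_mem = cuda.mem_alloc(host_mem.nbytes)   # matching device buffer
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append((host_mem, device_mem))
        else:
            outputs.append((host_mem, device_mem))
    return inputs, outputs, bindings, stream

def do_inference(context, bindings, inputs, outputs, stream):
    """Copy inputs to the device, run the engine, copy outputs back."""
    import pycuda.driver as cuda

    for host_mem, device_mem in inputs:
        cuda.memcpy_htod_async(device_mem, host_mem, stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for host_mem, device_mem in outputs:
        cuda.memcpy_dtoh_async(host_mem, device_mem, stream)
    stream.synchronize()
    return [host_mem for host_mem, _ in outputs]
```

The key points that fixed it for me were making sure the host buffers are page-locked and sized from the engine's own binding shapes and dtypes (so an FP16 engine gets FP16-sized buffers), and synchronizing the stream before reading the outputs.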