Failed to parse ONNX file on TensorRT5 C++ API but works on python API


I have an ONNX file that works with the Python API of TensorRT 5 (both engine building and inference) but fails to parse with the C++ API. I therefore tried building the engine with the Python API and deserializing it with the C++ API. When doing so, the runtime could not find the ResizeNearest plugin:

getPluginCreator could not find plugin ResizeNearest version 001 namespace

In a couple of posts on the forum, people suggested linking the proper libraries. For the sake of completeness, I included and linked every static library shipped with TensorRT, but the issue persists.
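One thing worth checking besides linkage: the C++ runtime does not register TensorRT's bundled plugins (ResizeNearest among them) automatically; they have to be registered by calling initLibNvInferPlugins() from NvInferPlugin.h before deserializing. A minimal sketch of what that might look like (the engine path "engine.plan" and the logger class are placeholders, not from the original post):

```cpp
#include <fstream>
#include <iostream>
#include <vector>

#include "NvInfer.h"
#include "NvInferPlugin.h"  // declares initLibNvInferPlugins()

// Minimal logger implementation required by the TensorRT API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

int main() {
    Logger logger;

    // Register all built-in plugins with the plugin registry *before*
    // deserializing an engine that uses any of them.
    initLibNvInferPlugins(&logger, "");

    // Read the engine serialized by the Python API ("engine.plan" is a
    // placeholder path).
    std::ifstream file("engine.plan", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "cannot open engine file\n"; return 1; }
    const auto size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(static_cast<size_t>(size));
    file.read(blob.data(), size);

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    if (!engine) { std::cerr << "deserialization failed\n"; return 1; }

    // ... run inference, then release resources (TensorRT 5 uses destroy()).
    engine->destroy();
    runtime->destroy();
    return 0;
}
```

Note that this also requires linking against libnvinfer_plugin (e.g. `-lnvinfer -lnvinfer_plugin`), not just libnvinfer.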

Going through the headers, I found no mention of the resize-nearest or upsample plugin that I need, and nothing in the changelogs either.

I would also prefer to avoid a major version upgrade to TensorRT 6, since that would cause many breaking changes in our codebase.

How do I solve this issue?


Can you provide the following details about the platform you are using so we can better help?
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o TensorRT version

Also, if possible please share the script & model file to reproduce the issue.

Meanwhile, could you please try the “trtexec” command to test the model?
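For reference, a minimal trtexec invocation might look like the following (the file paths are placeholders, and the exact flags can vary between TensorRT releases):

```shell
# Parse the ONNX file and build an engine with trtexec.
trtexec --onnx=model.onnx

# Optionally save the built engine for later deserialization in C++.
trtexec --onnx=model.onnx --saveEngine=engine.plan
```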


Linux Distro : Ubuntu 16.04
GPU : GTX 1080 Ti
Driver version : 418.87.01
CUDA Version : 10.0
CUDNN Version : 7.3.1

Found that the ONNX parser packaged with the release does not work, but the parser from the OSS repository is able to parse the ONNX file and generate the engine.
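For anyone hitting the same issue, a rough sketch of building the OSS ONNX parser follows; the branch to check out and the CMake variables depend on your TensorRT release, so the project's README should be consulted (the TensorRT install path below is a placeholder):

```shell
# Clone the open-source ONNX parser with its submodules.
git clone --recursive https://github.com/onnx/onnx-tensorrt.git
cd onnx-tensorrt
mkdir build && cd build

# Point CMake at the local TensorRT installation (placeholder path).
cmake .. -DTENSORRT_ROOT=/usr/local/TensorRT
make -j"$(nproc)"
sudo make install   # installs the OSS libnvonnxparser in place of the packaged one
```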

Is the issue resolved by using the OSS parser?