TRT parse_from_file API reports a network input error while 'parse' reports True

Description

For every model I use, before trying to parse it I first check whether it is fully supported via the parser's supports_model API, and only if its returned status is True do I move on to the next step: calling the parse_from_file API, which always returns False.

When I commented out the call to supports_model, the problem surprisingly disappeared and the parse_from_file API started returning True.

When I reactivate the supports_model API before calling the parse_from_file API, the problem immediately returns.

Regardless of whether I call the supports_model API, the parse API's returned status is True and the execute_async_v2 service works well.

When I declare two separate sets of builder, network, and parser (one for supports_model, and a second for the inference execution flow: parse, engine and context creation, and finally execution), everything works well.
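The working setup described above can be sketched roughly like this (a minimal sketch assuming the TensorRT 8.x explicit-batch Python API; the function names and the guarded import are mine, not from the original code):

```python
try:
    import tensorrt as trt  # TensorRT 8.x Python bindings
except ImportError:          # keep the sketch importable where TensorRT is absent
    trt = None


def model_is_supported(onnx_path, logger):
    """First set: builder/network/parser used ONLY for supports_model."""
    builder = trt.Builder(logger)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        # supports_model returns (supported, subgraph collection)
        supported, _subgraphs = parser.supports_model(f.read())
    return supported


def parse_for_inference(onnx_path, logger):
    """Second, independent set: used for the real parse and engine build."""
    builder = trt.Builder(logger)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)
    ok = parser.parse_from_file(onnx_path)  # True when using a fresh parser
    return builder, network, parser, ok
```

The key point is that each flow gets its own OnnxParser instance, so the supports_model call never touches the parser that later performs parse_from_file.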

Environment

TensorRT Version: 8.2.2.1
GPU Type: Quadro RTX 3000
Nvidia Driver Version: R471.11
CUDA Version: 11.2
CUDNN Version: 8.1.1
Operating System + Version: Windows 10
Python Version (if applicable): 3.6.8
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Baremetal

Relevant Files

None

Steps To Reproduce

Please see the description above.

My questions are:

  • Why and how does the supports_model API impact the parse_from_file API?

  • Why, if I replace the parse_from_file API with the parse API (not duplicated calls, but an actual replacement of one with the other), are their returned statuses not the same? If I call the supports_model API before the parse API, its returned status is True, as opposed to the parse_from_file API, whose returned status is False in this scenario.
    My expectation is that both of them should return the same status when the preceding conditions are the same. Am I right?

Thanks,

Hi,
Please check the link below, as it might answer your concerns

Thanks!

Thanks,
My case is based entirely on the TRT SDK Python samples.
There is no sample that uses the supports_model API or the parse_from_file API, only the parse API.

Please take any one of the SDK samples, such as onnx_resnet50.py, add a supports_model call, and replace the call to the parse API with a call to the parse_from_file API, and you will reproduce the problem described above.
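For reference, the two modifications to the sample's engine-building code look roughly like this (a hypothetical sketch in the style of onnx_resnet50.py, assuming TensorRT 8.x; the function name and guarded import are mine):

```python
try:
    import tensorrt as trt  # TensorRT 8.x Python bindings
except ImportError:          # keep the sketch importable where TensorRT is absent
    trt = None


def build_network(onnx_path, logger):
    """onnx_resnet50.py-style flow with the two modifications applied."""
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # Modification 1: add the capability check on the same parser instance.
    with open(onnx_path, "rb") as f:
        supported, _subgraphs = parser.supports_model(f.read())
    if not supported:
        return None

    # Modification 2: replace parser.parse(model.read()) with parse_from_file.
    # With Modification 1 in place, this is the call that returns False.
    if not parser.parse_from_file(onnx_path):
        for i in range(parser.num_errors):
            print(parser.get_error(i))  # surfaces the network input error
        return None

    return builder, network
```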

Thanks,

Hi,

We are tracking this internally. Our team is working on this issue.

Thank you.