TensorRT conversion fails for an ONNX model with a large input resolution

Software Version
DRIVE OS 6.0.8.1

Target Operating System
Linux

I am experiencing an issue when converting an ONNX model to a TensorRT engine using trtexec. The conversion fails when the input resolution is large.

Issue Description
I attempted to convert an ONNX model with an input resolution of 1600x800 (width x height) using trtexec. The conversion failed with the following error message:

Could not find any implementation for node /AveragePool_1.
[optimizer.cpp::computeCosts::3897] Error Code 10: Internal Error (Could not find any implementation for node /AveragePool_1.)
Engine could not be created from network
Building engine failed

■Relevant Logs Around /AveragePool_1 Layer
=============== Computing costs for /AveragePool_1
*************** Autotuning format combination: Float(400,400,400,1) → Float(200,200,200,1) ***************
Skipping CudnnPooling: No valid tactics for /AveragePool_1
--------------- Timing Runner: /AveragePool_1 (CaskPooling[0x8000002f])
Skipping tactic 0x933eceba7b866d59 due to exception Cask Pooling Runner Execute Failure
Skipping tactic 0xba33c80addb15739 due to exception Cask Pooling Runner Execute Failure
/AveragePool_1 (CaskPooling[0x8000002f]) profiling completed in 0.118985 seconds. Fastest Tactic: 0xd15ea5edd15ea5ed Time: inf
*************** Autotuning format combination: Float(400,1:4,400,1) → Float(200,1:4,200,1) ***************
--------------- Timing Runner: /AveragePool_1 (CaskPooling[0x8000002f])
Skipping tactic 0xfab3e2ee1c085a9a due to exception Cask Pooling Runner Execute Failure
/AveragePool_1 (CaskPooling[0x8000002f]) profiling completed in 0.229153 seconds. Fastest Tactic: 0xd15ea5edd15ea5ed Time: inf

■Questions
1. The logs show multiple occurrences of the message “Cask Pooling Runner Execute Failure” related to the /AveragePool_1 layer. Is this the direct cause of the conversion failure?

2. When I reduce the input resolution to 1600x384 (width x height), the conversion succeeds. Can I conclude that the large input resolution is the cause of the failure?
If so, are there any known limitations on input resolution when converting models using trtexec?

Dear @sota.sato.ay ,
Can you share the model so we can reproduce the issue? If you are using the trtexec tool, please try increasing --memPoolSize to see if that fixes it.

Dear SivaRamaKrishnaNV,

Thank you for your response.

Unfortunately, due to security reasons, I am unable to share the model.

We have tried increasing the memory pool size by specifying the --memPoolSize=workspace:8192 and --memPoolSize=workspace:4096 options in trtexec, but the conversion still fails with the same error.
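For completeness, the invocation we used was along the following lines (the model path, input tensor name, and shape below are placeholders, not our actual values):

```shell
# Sketch of the trtexec invocation; file names and the input tensor
# name "input" are placeholders. --shapes only applies if the ONNX
# model declares dynamic input dimensions.
trtexec --onnx=model.onnx \
        --shapes=input:1x3x800x1600 \
        --memPoolSize=workspace:8192 \
        --saveEngine=model.engine
```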

Aside from increasing the memory pool size, are there any other steps or suggestions you recommend to resolve this issue?

I appreciate your continued support.

Could you please provide an update on this topic?

Dear @sota.sato.ay ,
Is it possible to share a dummy model that can reproduce the issue?
There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Could you please provide an update on this topic?

Dear carolyuu,

Here are the current findings:

  1. We confirmed that conversion succeeds up to a resolution of 1600x640.
    When sweeping resolutions to find which ones convert, 1600x640 succeeded, but 1600x704 did not.
    Resolutions were chosen as multiples of 64 to account for alignment.

  2. We confirmed that by replacing the avg_pool2d operation with a conv2d operation, the model can be converted to TensorRT at a resolution of 1600x800.
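The replacement in (2) works because average pooling is mathematically equivalent to a convolution with a fixed uniform kernel, so an AveragePool node can be swapped for a Conv node with constant weights 1/(k*k). A minimal NumPy sketch of that equivalence (illustrative only, not our actual model code):

```python
import numpy as np

def avg_pool2d(x, k):
    """Average pooling with a k x k window and stride k (no padding)."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]  # drop rows/cols that don't fill a window
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def conv2d_uniform(x, k):
    """Same result via convolution with a uniform k x k kernel, stride k."""
    kernel = np.full((k, k), 1.0 / (k * k))  # constant weights 1/(k*k)
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            out[i, j] = np.sum(x[i*k:(i+1)*k, j*k:(j+1)*k] * kernel)
    return out

x = np.arange(16.0).reshape(4, 4)
assert np.allclose(avg_pool2d(x, 2), conv2d_uniform(x, 2))
```

Because the two produce identical outputs, the substitution changes which TensorRT tactics (convolution rather than pooling) are tried, without changing the model's numerics.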

This issue has been resolved, so we have no further information to provide.
I hope this information is helpful.