MaxPool layer in DLA falls back to GPU

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.0
[y] DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
[y] Linux
QNX
other

Hardware Platform
[y] NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
[y] 1.6.0.8170
other

Host Machine Version
[y] native Ubuntu 18.04
other

I converted my model to run on the DLA, but the MaxPool layer falls back to the GPU and I cannot find the reason.
The log says: MaxPool_2: DLA does not support exclusive pooling.
So, what is exclusive pooling?

My MaxPool layer properties in ONNX are as follows (a TensorRT sketch of the same configuration is shown after the list):
type: MaxPool
name: MaxPool_2
ceil_mode: 0
kernel_shape: 3, 3
pads: 1, 1, 1, 1
strides: 2, 2
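
For context, the same pooling configuration expressed directly with the TensorRT C++ network API would look roughly like the sketch below; the network and input names are placeholders, since in my case the layer actually comes from the ONNX parser.

```cpp
// Hypothetical sketch: a 3x3 max pool with stride 2 and padding 1,
// matching the ONNX attributes above. "network" and "input" are placeholders.
nvinfer1::IPoolingLayer* pool = network->addPoolingNd(
    *input, nvinfer1::PoolingType::kMAX, nvinfer1::Dims2{3, 3});
pool->setStrideNd(nvinfer1::Dims2{2, 2});
pool->setPaddingNd(nvinfer1::Dims2{1, 1});
pool->setName("MaxPool_2");
```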

In the TensorRT documentation on DLA-supported layers (a sketch for checking this per layer follows the list):
Pooling layer

  • Only two spatial dimension operations are supported.
  • Both FP16 and INT8 are supported.
  • Operations supported: kMAX, kAVERAGE.
  • Dimensions of the window must be in the range [1, 8].
  • Dimensions of padding must be in the range [0, 7].
  • Dimensions of stride must be in the range [1, 16].
  • Exclusive padding with kAVERAGE pooling is not supported.
  • With INT8 mode, input and output tensor scales must be the same.
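
Before building the engine, you can also ask TensorRT which layers the DLA will accept and let the rest fall back to the GPU. A minimal sketch, assuming the network has already been parsed from the ONNX model and that your TensorRT release exposes these calls on IBuilderConfig (the function and variable names are mine):

```cpp
#include <cstdio>
#include "NvInfer.h"

// Sketch: make the DLA the default device, allow GPU fallback for unsupported
// layers, and report which layers TensorRT says cannot run on the DLA.
void reportDlaFallback(nvinfer1::IBuilderConfig* config,
                       nvinfer1::INetworkDefinition* network)
{
    config->setFlag(nvinfer1::BuilderFlag::kFP16);          // DLA needs FP16 or INT8
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);  // unsupported layers go to GPU
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);

    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        if (!config->canRunOnDLA(layer))
        {
            std::printf("Layer %s will fall back to the GPU\n", layer->getName());
        }
    }
}
```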

Dear @wang_chen2,
Could you share your model?

Hi,
Sorry, my company cannot share the model.

Dear @wang_chen2,
Could you share a simple model so I can reproduce the issue on my side?

Hi,
Sorry, the network is isolated, so I cannot share even a simplified model.
I could delete the MaxPool layer if needed.

Please try setting the layer to inclusive padding with setAverageCountExcludesPadding(false).
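
For anyone hitting the same message, a minimal sketch of applying that setting to every pooling layer after parsing the ONNX model (the function name and the network variable are assumptions, not part of the original suggestion):

```cpp
#include "NvInfer.h"

// Sketch: switch every pooling layer in the parsed network to inclusive
// padding so the DLA no longer rejects it as "exclusive pooling".
void forceInclusivePadding(nvinfer1::INetworkDefinition* network)
{
    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        if (layer->getType() == nvinfer1::LayerType::kPOOLING)
        {
            auto* pool = static_cast<nvinfer1::IPoolingLayer*>(layer);
            pool->setAverageCountExcludesPadding(false);  // false = padded elements are counted (inclusive)
        }
    }
}
```

Setting the flag to false tells TensorRT to count the padded positions as part of the pooling window, i.e. inclusive padding, which matches the restriction on exclusive padding quoted from the documentation above.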

Hi @VickNV,
Thank you very much. The issue is solved.