TensorRT conversion of the EfficientNet-B0 model published on NVIDIA's GitHub fails with an out-of-range error

When I try to convert an EfficientNet-B0 model for trtexec to run, it fails. The model comes from the NVIDIA GitHub repository: DeepLearningExamples/PyTorch/Classification/ConvNets/efficientnet at master · NVIDIA/DeepLearningExamples · GitHub
The steps are as follows:
1. Download the pre-trained model: nvidia_efficientnet-b0_210412.onnx.
2. Run "python model2onnx.py --arch efficientnet-b0 --pretrained-from-file nvidia_efficientnet-b0_210412.onnx -b 1 --trt True"; the ONNX file was generated.
3. Try trtexec; it fails with the command:
"trtexec --onnx=nvidia_efficientnet-b0_210412.onnx --explicitBatch --int8 --workspace=1024 --saveEngine=./effnet_b0_ws1024_gpu.engine"

The error says:
Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of ‘std::out_of_range’
what(): Attribute not found: axes
Aborted (core dumped)

I am using the NGC container as pointed out in the GitHub README page.

Hi, please refer to the links below to perform inference in INT8.

Thanks!

Hi,

Which TensorRT version and which opset version are you using?

dpkg -l | grep nvinfer
ii libcublas-11-1 1.0 all libcublas packaging shim to work around libnvinfer package dependency issue
ii libcublas-dev-11-1 1.0 all libcublas-dev packaging shim to work around libnvinfer package dependency issue
ii libnvinfer-bin 7.2.2-1+cuda11.1 amd64 TensorRT binaries
ii libnvinfer-dev 7.2.2-1+cuda11.1 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 7.2.2-1+cuda11.1 amd64 TensorRT plugin libraries and headers
ii libnvinfer-plugin7 7.2.2-1+cuda11.1 amd64 TensorRT plugin library
ii libnvinfer7 7.2.2-1+cuda11.1 amd64 TensorRT runtime libraries

I don't know how to get the opset version, but I am using the NGC container with the command: nvidia-docker run --rm -it -v /mnt/imagenet:/imagenet --ipc=host nvidia_efficientnet

We recommend that you use the latest TensorRT version (8.4) or the latest NGC container.
https://developer.nvidia.com/nvidia-tensorrt-8x-download

Thank you.