trtexec INT8 conversion failing with calibration cache generated using QDQ translator

Description

I am trying to convert an ONNX model to a TensorRT engine using the trtexec utility. The engine should run in INT8, so I generated a calibration file with QDQTranslator, which converts a QAT model to a PTQ model. But when I use that calibration file for the INT8 conversion, trtexec fails with the following error:

[E] Error[3]: IExecutionContext::executeV2: Error Code 3: API Usage Error (Parameter check failed, condition: nullPtrAllowed. Tensor “input” is bound to nullptr, which is allowed only for an empty input tensor, shape tensor, or an output tensor associated with an IOuputAllocator.)

What could be causing this?
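For reference, the conversion command I am running looks roughly like this (model and cache file names below are placeholders, not my actual paths):

```shell
# Build an INT8 engine from an ONNX model using an external calibration cache.
# --calib points trtexec at the calibration cache produced by the QDQ translator.
trtexec --onnx=model.onnx \
        --int8 \
        --calib=calibration.cache \
        --saveEngine=model_int8.engine
```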

Environment

TensorRT Version: 10.3.0
GPU Type: AGX Orin 64 GB development kit
Nvidia Driver Version:
CUDA Version: 12.6.10
CUDNN Version: 9.3.0
Operating System + Version: JetPack 6.1
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Baremetal

Cache file is attached here:
model.zip (763 Bytes)

Any updates here? This concerns the latest TensorRT version.