Getting wrong results after tlt-converter on Jetson

Hi,
We finished classification training on a 2080Ti and generated the .etlt and .trt files. When we test the generated .trt engine on the 2080Ti, it gives good results (above 80%) on a video. But when we convert the .etlt file to an engine file via tlt-converter on a Xavier NX, it gives bad results on the same video. Please help us resolve this problem and find out why the results are bad on the Xavier NX. Some of my configuration is below.

Deepstream 5.0

Training Env : 2080Ti , TLT container nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3

Testing Env : Jetson Xavier NX, JetPack 4.4

Command used for tlt-converter (tlt_7.1) on Jetson Xavier NX:

./tlt-converter -k 'abcd1234' -c age_classification.bin -d 3,224,224 -o predictions/Softmax -e ./age_classification_int8.engine -i nchw -m 64 -b 64 -t int8 ./age_classification.etlt
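
As a sanity check (my suggestion, not part of the original post), you can also build an FP16 engine from the same .etlt with the same flags, just dropping the calibration cache and switching -t; the engine name age_classification_fp16.engine is only an example. If the FP16 engine matches the 2080Ti results but the INT8 engine does not, the calibration cache is the likely cause:

./tlt-converter -k 'abcd1234' -d 3,224,224 -o predictions/Softmax -e ./age_classification_fp16.engine -i nchw -m 64 -b 64 -t fp16 ./age_classification.etlt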

We test via DeepStream 5.0 on both environments (2080Ti and Xavier NX) with the same config file. Details of the config file are below:

[property]
gpu-id=0
net-scale-factor=1
model-engine-file=./Model/age_classification_int8.engine         (For Xavier NX)
#model-engine-file=./Model/age_classification.trt         (for 2080 TI)
labelfile-path=./Model/age_classification.txt
batch-size=1
network-mode=1
input-object-min-width=0
input-object-min-height=0
process-mode=2
model-color-format=1
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.50

Please configure the INT8 calibration file in the DeepStream config file, and add the other required parameters.
Refer to the TLT user guide:
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_classification_model

[property]
gpu-id=0

offsets=123.67;116.28;103.53
model-color-format=1
batch-size=30

int8-calib-file=Path to optional INT8 calibration cache
labelfile-path=Path to classification_labels.txt
tlt-encoded-model=Path to Classification TLT model
tlt-model-key=Key to decrypt model
input-dims=c;h;w;0 # where c = number of channels, h = height of the model input, w = width of model input, 0: implies CHW format.
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax #output node name for classification

network-mode=1
process-mode=2
interval=0
network-type=1 # defines that the model is a classifier.
gie-unique-id=1
classifier-threshold=0.2
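
For example, here is a sketch of that template filled in for the model above. It reuses the paths, key, and IDs from your post; the int8-calib-file entry assumes the age_classification.bin you passed to tlt-converter with -c is the calibration cache exported alongside the .etlt:

[property]
gpu-id=0
# TLT classification uses caffe-style preprocessing: scale 1.0 plus per-channel mean offsets
net-scale-factor=1.0
offsets=123.67;116.28;103.53
model-color-format=1
batch-size=1
# calibration cache exported with the .etlt (assumed to be the same .bin used with -c)
int8-calib-file=./Model/age_classification.bin
# with these set, nvinfer can rebuild the engine on the NX itself if model-engine-file is absent
tlt-encoded-model=./Model/age_classification.etlt
tlt-model-key=abcd1234
input-dims=3;224;224;0
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
labelfile-path=./Model/age_classification.txt
network-mode=1
network-type=1
process-mode=2
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
classifier-threshold=0.50

Letting DeepStream build the engine from the .etlt on the NX this way (instead of pointing model-engine-file at a separately converted engine) also rules out any mismatch between the tlt-converter build and the runtime.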