Power consumption difference between TLT 1.0 and TAO 3.0 models during inference

I have two TLT models trained on the same dataset with the same architecture.
The first model was trained with TLT 1.0 and is deployed on some AGX Xaviers running DeepStream 4.x.
Now I am trying to upgrade to DeepStream 6.0.1, but when I run the TLT 1.0 model in DeepStream 6.0.1, the FPS is significantly lower and the power consumption is very high (the temperature also jumps by 10-12 degrees within a few seconds). If I use the TAO 3.0 model instead, the FPS is higher and the power consumption is roughly half.
Is this expected behavior? I need to use the TLT 1.0 model because I am unable to reproduce the same training results with TAO 3.0.

Hardware: AGX Xavier
Network Type: DetectNet_V2 (With Resnet18 backbone)
TLT Version: 3.0
DeepStream Version: 6.0.1

Some observations:
TLT 1.0 + DeepStream 6.0.1
(screenshot not reproduced)

TAO 3.0 + DeepStream 6.0.1
(screenshot not reproduced)
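For anyone trying to reproduce the comparison: the power figures above can be collected on the Xavier with `tegrastats` while each pipeline runs. Below is a minimal sketch of parsing the per-rail power readings out of a logged line. Note the sample line and rail names are illustrative assumptions, since the exact tegrastats format varies across JetPack releases.

```python
import re

# tegrastats reports power rails as "NAME <instant>mW/<average>mW".
# NOTE: rail names and formatting differ by JetPack version; the sample
# line below is illustrative, not captured from the boards in question.
POWER_RE = re.compile(r"(\w+)\s+(\d+)mW/(\d+)mW")

def parse_power(line):
    """Return {rail: (instant_mW, average_mW)} for each power rail found."""
    return {name: (int(cur), int(avg))
            for name, cur, avg in POWER_RE.findall(line)}

sample = "RAM 5000/31927MB GPU 12345mW/11000mW CPU 3456mW/3300mW SOC 2000mW/1900mW"
readings = parse_power(sample)
total_now = sum(cur for cur, _ in readings.values())
print(readings["GPU"])  # -> (12345, 11000)
print(total_now)        # -> 17801, combined instantaneous draw of parsed rails
```

Logging one line per second for each model and comparing the summed rail totals gives a like-for-like power comparison alongside the FPS numbers.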
Moving to TAO forum.

The TLT 1.0 model is an old version. When it was released, it was not intended for DS 6.0.
I suggest you use the latest TAO.