I have two TLT models trained on the same dataset with the same architecture.
The first model was trained with TLT 1.0 and deployed on several Xaviers running DeepStream 4.x.
Now I am trying to upgrade to DeepStream 6.0.1. When I run the TLT 1.0 model in DeepStream 6.0.1, the FPS is significantly lower and the power consumption is very high (the temperature also jumps by 10-12 degrees within a few seconds). With the TAO 3.0 model, the FPS is higher and the power consumption is almost halved.
Is this expected behavior? I need to use the TLT 1.0 model because I cannot reproduce the same training results with TAO 3.0.
Hardware: AGX Xavier
Network Type: DetectNet_V2 (With Resnet18 backbone)
TLT/TAO Version: TLT 1.0 and TAO 3.0
DeepStream Version: 6.0.1
Configurations tested:
TLT 1.0 + DeepStream 6.0.1
TAO 3.0 + DeepStream 6.0.1
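One way to narrow down whether the slowdown comes from the model engine itself (e.g. precision or layer fallbacks) rather than from the DeepStream pipeline is to profile both TensorRT engines directly with `trtexec` on the Xavier. This is only a sketch; the engine file names below are hypothetical placeholders for your actual deployed engines.

```shell
# Assumed engine file names -- replace with your real deployed engines.
TLT_ENGINE="tlt1_resnet18.engine"
TAO_ENGINE="tao3_resnet18.engine"

profile_engine() {
  # Profile a TensorRT engine standalone, outside DeepStream.
  # Falls back to a notice when trtexec is not on PATH (e.g. off-device).
  if command -v trtexec >/dev/null 2>&1; then
    trtexec --loadEngine="$1" --iterations=200
  else
    echo "skip: trtexec unavailable, would profile $1"
  fi
}

profile_engine "$TLT_ENGINE"
profile_engine "$TAO_ENGINE"
```

If the throughput gap already shows up here, the difference is in the engines (worth comparing the precision each was built with); if not, the pipeline configuration is the more likely cause. While each pipeline runs, `sudo tegrastats` can log power draw and clocks, and `sudo nvpmodel -q` confirms both runs used the same power mode.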