TLT resnet18 performance drop between .tlt inference and .engine

Also, in the other topic mentioned above, the end user can run inference correctly with the TLT classification model. So, please refer to the config file https://forums.developer.nvidia.com/uploads/short-url/rk4x7xqir6N1nl3QpfxBcTTE6FA.txt in Issue with image classification tutorial and testing with deepstream-app - #21 by Morganh to narrow down the issue.
For example, check settings such as process-mode=1, etc.
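As a rough sketch (the exact values must come from the linked config file, and the model paths below are placeholders), the relevant keys in the nvinfer [property] section of a DeepStream config for a TLT classification model typically look like this:

```ini
[property]
gpu-id=0
# process-mode=1 runs the classifier on the full frame (primary mode);
# process-mode=2 would run it as a secondary classifier on detected objects
process-mode=1
# network-type=1 marks the model as a classifier
network-type=1
# Placeholder paths/key — replace with your actual TLT model, key, and label file
tlt-encoded-model=./resnet18_classification.etlt
tlt-model-key=YOUR_KEY
labelfile-path=./labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16 — a mismatch here vs. the .tlt run
# can also cause accuracy differences
network-mode=0
classifier-threshold=0.2
```

Comparing these fields (especially process-mode, network-mode, and the preprocessing parameters) against the referenced config is a common way to narrow down an accuracy gap between .tlt inference and the generated .engine.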