TLT resnet18 performance drop between .tlt inference and .engine

For TLT classification model inference, there are three methods.

1st: tlt classification inference. You already mentioned that it is running well.

2nd: standalone Python inference. I made some modifications based on one customer's code. See Inferring resnet18 classification etlt model with python. That end user got the same result as tlt classification inference.
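For reference, here is a minimal sketch of the preprocessing a standalone Python script typically needs so its results match tlt classification inference. It assumes the usual Caffe-style input convention for TLT classification models (BGR channel order, per-channel mean subtraction with the offsets 103.939;116.779;123.68 mentioned in this thread, CHW layout); the function name and 224x224 input size are illustrative, not taken from the linked code.

```python
import numpy as np

# Per-channel means (B, G, R) — the same offsets used in the DeepStream config.
MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(image_rgb: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 RGB image (already resized, e.g. 224x224)
    into the NCHW float32 tensor the engine expects."""
    img = image_rgb.astype(np.float32)
    img = img[..., ::-1]            # RGB -> BGR
    img = img - MEAN_BGR            # per-channel mean subtraction
    img = img.transpose(2, 0, 1)    # HWC -> CHW
    return img[np.newaxis, ...]     # add batch dimension -> NCHW
```

If the standalone script and the tlt infer results still differ, this preprocessing (channel order, offsets, layout) is the first thing to compare.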

3rd: run inference with DeepStream. Please see the solution (comments 21, 24, and 32) in Issue with image classification tutorial and testing with deepstream-app.

Main changes:
- Set the offsets to 103.939;116.779;123.68
- Generate the avi file with GStreamer
- Set "scaling-filter=5"
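The changes above can be sketched as the relevant keys in the nvinfer configuration file. This is a partial fragment, assuming the standard DeepStream nvinfer key names; model paths, labels, and the other required keys are omitted, and model-color-format=1 (BGR) is my assumption to match the Caffe-style offsets.

```ini
[property]
# Per-channel mean subtraction (B;G;R), matching the training preprocessing
offsets=103.939;116.779;123.68
# 1 = BGR input (assumption — matches the Caffe-style offsets above)
model-color-format=1
# Interpolation method used when scaling frames to the network input size
scaling-filter=5
```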