Error running ResNet18 classifier in DeepStream

I am trying to run an exported TLT model in DeepStream as a secondary GPU inference engine (SGIE), but I get the following error:

ERROR: Deserialize engine failed because file path: /home/minh/myapps/test_apps/yolov3_app/./models/classifiers/stringy-10_classifier.etlt_b2_gpu0_fp16.engine open error
0:00:01.159060983 30993 0x558ce85000 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 3]: deserialize engine from file :/home/minh/myapps/test_apps/yolov3_app/./models/classifiers/stringy-10_classifier.etlt_b2_gpu0_fp16.engine failed
0:00:01.159286143 30993 0x558ce85000 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 3]: deserialize backend context from engine from file :/home/minh/myapps/test_apps/yolov3_app/./models/classifiers/stringy-10_classifier.etlt_b2_gpu0_fp16.engine failed, try rebuild
0:00:01.159427941 30993 0x558ce85000 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 3]: Trying to create engine from model files
ERROR: Uff input blob name is empty
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.160114367 30993 0x558ce85000 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 3]: build engine file failed
Bus error (core dumped)

The config file is:

[property]
gpu-id=0
net-scale-factor=1
labelfile-path=./models/classifiers/stringy_labels.txt
tlt-model-key=[key]
tlt-encoded-model=./models/classifiers/stringy-10_classifier.etlt
model-engine-file=./models/classifiers/stringy-10_classifier.etlt_b2_gpu0_fp16.engine
force-implicit-batch-dim=1
batch-size=2
#0=FP32 1=INT8 2=FP16 mode
network-mode=2
#input-object-min-width=64
#input-object-min-height=64
input-object-min-width=0
input-object-min-height=0
#1=Primary 2=Secondary
process-mode=2
#0=RGB 1=BGR 2=GRAY
model-color-format=1
infer-dims=3;360;360
workspace-size=10000
gpu-id=0
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
#0=Detector 1=Classifier 2=Segmentation 3=Instance Segmentation
network-type=1
maintain-aspect-ratio=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
#scaling-filter=0
#scaling-compute-hw=0

Why did I get the following error? I didn’t pass in any UFF model at all.

ERROR: Uff input blob name is empty
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API

I want DeepStream to build the engine for me directly; this works for my detector model but fails for the classifier. How do I go about fixing these errors?

Your config file is missing

uff-input-blob-name=input_1

The classifier .etlt exported from TLT wraps a UFF graph, so nvinfer's TLT model parser needs the name of the input blob to build the engine; without it, the custom network creation function fails with "Uff input blob name is empty", even though you never passed in a .uff file yourself.
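
For reference, a minimal sketch of the relevant part of the SGIE [property] section with that key added. The paths, key placeholder, dimensions, and blob names are taken from your posted config; only uff-input-blob-name and the comments are new:

[property]
tlt-model-key=[key]
tlt-encoded-model=./models/classifiers/stringy-10_classifier.etlt
model-engine-file=./models/classifiers/stringy-10_classifier.etlt_b2_gpu0_fp16.engine
# The .etlt contains a UFF graph; nvinfer needs its input blob name to build the engine.
# input_1 is the usual input name for TLT classification models.
uff-input-blob-name=input_1
infer-dims=3;360;360
output-blob-names=predictions/Softmax

On the next run nvinfer should build the engine from the .etlt and serialize it to the model-engine-file path, so the "Deserialize engine failed" warnings at startup should go away on subsequent runs.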