Error running DSSD with DeepStream

Hi

I am trying to deploy a DSSD model trained with TLT. I was able to generate a TensorRT engine from my trained model and then tried to run deepstream-test1 with that engine, but I get the error below.

Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

Using winsys: x11
Opening in BLOCKING MODE
0:00:16.190931245 13761 0x55bf358520 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test1/trt.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT Input 3x384x1248
1 OUTPUT kFLOAT NMS 1x200x7
2 OUTPUT kFLOAT NMS_1 1x1x1

0:00:16.192573226 13761 0x55bf358520 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test1/trt.engine
0:00:16.430537202 13761 0x55bf358520 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:667> [UID = 1]: Detect-postprocessor failed to init resource because dlsym failed to get func NvDsInferParseCustomDSSDTLT pointer
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:16.745399610 13761 0x55bf358520 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:16.745473987 13761 0x55bf358520 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Returned, stopping playback
Deleting pipeline
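
From the log, the failure seems to be that dlsym cannot find NvDsInferParseCustomDSSDTLT in my custom parser library. Is there a way to confirm the symbol is actually exported from the .so? I am guessing something like this would show it (the path is just the custom-lib-path from my config below):

nm -D /opt/nvidia/deepstream/deepstream_tlt_apps/nvdsinfer_customparser_dssd_tlt/libnvds_infercustomparser_dssd_tlt.so | grep NvDsInferParseCustomDSSDTLT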


Here is my config file:


[property]
gpu-id=0
#net-scale-factor=0.0039215697906911373
net-scale-factor=1.0
offsets=103.939;116.779;123.68
tlt-model-key=aDdhc3VqdTQ4NzJiajllOTVuZmQ0dWFwYW86ODYzY2M5MjYtZWUzMS00NzkxLWJiNDEtY2E4NjhmMmYyYTYz
model-engine-file=trt.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/labels.txt
#input-dims=03;384;1248;0
uff-input-dims=03;384;1248;0
uff-input-blob-name=Input
#force-implicit-batch-dim=1
batch-size=1
model-color-format=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=2
cluster-mode=1
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomDSSDTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream_tlt_apps/nvdsinfer_customparser_dssd_tlt/libnvds_infercustomparser_dssd_tlt.so

[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0


Please help me identify the problem.

Thanks