Need help with tao_triton video inference

I tried with the custom parser from

GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream

but now I get the error below:

ERROR: infer_postprocess.cpp:344 Detect-postprocessor failed to init resource because dlsym failed to get func NvDsInferParseCustomBatchedNMSTLT pointer
0:00:00.390680778 20514 0x448e2d0 ERROR nvinferserver gstnvinferserver.cpp:361:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in allocateResource() <infer_cuda_context.cpp:519> [UID = 1]: failed to allocate resource for postprocessor., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:00.390708228 20514 0x448e2d0 ERROR nvinferserver gstnvinferserver.cpp:361:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:109> [UID = 1]: Failed to allocate buffers
0:00:00.390745052 20514 0x448e2d0 WARN nvinferserver gstnvinferserver_impl.cpp:510:start: error: Failed to initialize InferTrtIsContext
0:00:00.390759029 20514 0x448e2d0 WARN nvinferserver gstnvinferserver_impl.cpp:510:start: error: Config file path: config/TripleRiding/config_infer.txt_SoFile
0:00:00.392358949 20514 0x448e2d0 WARN nvinferserver gstnvinferserver.cpp:459:gst_nvinfer_server_start: error: gstnvinferserver_impl start failed
Warning: gst-library-error-quark: Configuration file batch-size reset to: 1 (5): gstnvinferserver_impl.cpp(287): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary

Below is the custom lib configuration I used:

custom_parse_bbox_func: "NvDsInferParseCustomNMSTLT"

custom_lib {
  path: "deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so"
}
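Note that the log shows dlsym looking for NvDsInferParseCustomBatchedNMSTLT, while the config above names NvDsInferParseCustomNMSTLT, so the config file the pipeline actually loaded may differ from the one pasted here. If the model really needs the batched NMS parser, the relevant nvinferserver section might look like the sketch below (this is an assumption based on the standard Gst-nvinferserver config layout; the path is copied from this post and not verified):

```
infer_config {
  postprocess {
    detection {
      # must exactly match a symbol exported by the library in custom_lib
      custom_parse_bbox_func: "NvDsInferParseCustomBatchedNMSTLT"
    }
  }
  custom_lib {
    path: "deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so"
  }
}
```

Whichever name is used, it has to match one of the symbols exported by the .so, or allocateResource() fails as in the log.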

and I see the same error in the link.