TAO MaskRCNN inference fails


I am trying to run the MaskRCNN-based pretrained PeopleSegNet model from the NVIDIA model zoo, which can be found here: PeopleSegNet | NVIDIA NGC.
I converted it to a TensorRT engine using the deepstream_tao_apps/pgie_peopleSegNetv2_tao_config.yml at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub config file. The DeepStream SDK performed the engine conversion, and I am trying to use DeepStream for inference as well, but I am running into issues. The config file specifies a custom parser function name and library path, pointing to the deepstream_tao_apps/post_processor at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub library. I built that library successfully and verified that it does contain the function NvDsInferParseCustomMrcnnTLTV2.
At inference time, DeepStream fails to create the NvDsInfer context because it cannot find the function pointer:
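
For reference, the parser hookup in that config looks roughly like this (a sketch, not the full file; the library path is a placeholder for wherever you built the post_processor library, and the key names are standard nvinfer config properties):

```yaml
property:
  # instance-segmentation parser exported by the post_processor library
  parse-bbox-instance-mask-func-name: NvDsInferParseCustomMrcnnTLTV2
  # placeholder path -- point this at your built libnvds_infercustomparser_tao.so
  custom-lib-path: ./post_processor/libnvds_infercustomparser_tao.so
  # network-type 3 = instance segmentation; masks must be requested explicitly
  network-type: 3
  output-instance-mask: 1
```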

INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT Input           3x576x960       
1   OUTPUT kFLOAT generate_detections 100x6           
2   OUTPUT kFLOAT mask_fcn_logits/BiasAdd 100x2x28x28     

INFO [1657147952.154466] NVDSLogger Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /media/saumil/extra/activml/models/segnet/model.step-600000_trt825_b2n10.etlt_b1_gpu0_fp16.engine
ERROR [1657147952.154703] NVDSLogger Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:858> [UID = 1]: InstanceSegment-postprocessor failed to init resource because dlsym failed to get func ���W pointer
ERROR: nvdsinfer_context_impl.cpp:1066 Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: nvdsinfer_context_impl.cpp:1272 Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR [1657147952.158217] NvDsInferContext_Create failed

Am I doing something wrong here? How can I debug this issue further?


TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered


This looks like a TAO Toolkit related issue. We will move this post to the TAO Toolkit forum.


Sorry, this was my mistake. I was passing NvDsInferParseCustomMrcnnTLTV2 as customBBoxParseFuncName instead of customBBoxInstanceMaskParseFuncName in the NvDsInferContextInitParams struct.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.