Not able to run PeopleSegNet (MaskRCNN) NVIDIA TAO (.etlt) model with DeepStream

Hi, I was trying to run the PeopleSegNet (MaskRCNN) TAO (.etlt) model in DeepStream, using the config file provided here (deepstream_tao_apps/configs/peopleSegNet_tao at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub). I built the TensorRT OSS plugin and replaced the library as described in this doc (deepstream_tao_apps/TRT-OSS/x86 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub). However, I was unable to run the model; please find the error message in the screenshot below. Could you please help us resolve this issue?

DeepStream config file: deepstream_tao_apps/pgie_peopleSegNet_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
Model: PeopleSegNet TAO model which is downloaded from
Labels file: deepstream_tao_apps/peopleSegNet_labels.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
Hardware Platform: Tesla T4
DeepStream Version: 6.0
TensorRT Version: 8.0.1
CUDA Version: 11.3


Can you try another config file: deepstream_tao_apps/pgie_peopleSegNetv2_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub?

Hey Morganh, thanks for the suggestion. I was able to create the TRT engine file using that v2 config and its model, but I am not able to parse the segmentation-mask output, which is supposed to be in pyds.NvDsObjectMeta.mask_params. When I try to access it, I cannot read the mask data. Could you please help with how to parse the output of the model and how to get the mask info in the ObjectMeta? Please find the error message below.


For postprocessing of MaskRCNN, you can refer to tao-toolkit-triton-apps/ at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub
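As a rough illustration of the postprocessing step that reference covers: MaskRCNN emits one small fixed-size float mask per detected object, which has to be resized to the object's bounding box and thresholded before it can be overlaid. (In newer DeepStream Python bindings the raw per-object mask is exposed through `obj_meta.mask_params`, though the exact accessor depends on the pyds version.) The sketch below is a generic, self-contained NumPy version of that resize-and-threshold step; the function and parameter names are illustrative, not taken from the TAO or DeepStream sources.

```python
# Hedged sketch: generic MaskRCNN-style mask postprocessing with NumPy.
# A raw mask is a small float grid with values in [0, 1]; to use it you
# resize it to the detected object's bounding box and binarize it with a
# threshold (0.5 is a common default, not a value from the TAO config).
import numpy as np

def postprocess_mask(mask: np.ndarray, box_w: int, box_h: int,
                     threshold: float = 0.5) -> np.ndarray:
    """Resize a raw float mask to (box_h, box_w) via nearest-neighbor
    sampling, then binarize it with the given threshold."""
    mh, mw = mask.shape
    # Map each destination pixel back to its nearest source cell.
    ys = (np.arange(box_h) * mh // box_h).clip(0, mh - 1)
    xs = (np.arange(box_w) * mw // box_w).clip(0, mw - 1)
    resized = mask[np.ix_(ys, xs)]
    return (resized > threshold).astype(np.uint8)

# Toy example: a 4x4 mask whose right half is "object".
raw = np.zeros((4, 4), dtype=np.float32)
raw[:, 2:] = 0.9
binary = postprocess_mask(raw, box_w=8, box_h=8)
print(binary.shape)         # (8, 8)
print(binary[:, :4].sum())  # left half stays background: 0
print(binary[:, 4:].sum())  # right half is foreground: 32
```

In a real probe function you would feed the per-object mask from the metadata into a routine like this and draw the resulting binary mask onto the frame.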


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.