Hi,
We are encountering some difficulties integrating a custom Mask R-CNN model into DeepStream 6.0.
• Hardware Platform (Jetson / GPU) Jetson Xavier AGX 16 GB
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only) L4T 32.6.1
• Issue Type (questions, new requirements, bugs)
I built a custom pipeline that takes an H.264 video file as input, runs inference, and then records the mask for each frame. It works with a custom FCN model, but I want to replace that model with a custom Mask R-CNN. The Mask R-CNN was trained with the TAO Toolkit from a ResNet-50 backbone and then exported to an engine file from the custom DeepStream pipeline. During export we observe this:
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT Input 3x832x1344
1 OUTPUT kFLOAT generate_detections 10x6
2 OUTPUT kFLOAT mask_fcn_logits/BiasAdd 10x4x28x28
I followed this tutorial: MaskRCNN — TAO Toolkit 3.22.05 documentation, having downloaded TensorRT OSS, built the necessary libraries…
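In case it helps, an nvinfer [property] section for a TAO MaskRCNN model typically looks like the sketch below (based on the TAO sample configs; the path and class count are placeholders — our actual settings are in the attached config_v1.txt):

```
[property]
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=0
infer-dims=3;832;1344
# 3 = instance segmentation
network-type=3
output-instance-mask=1
num-detected-classes=2
# custom TAO MaskRCNN output parser from deepstream_tao_apps
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLTV2
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so
```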
With the custom pipeline, I am able to retrieve the bounding boxes of the detected objects from the GST buffer with a probe function during inference, but I cannot retrieve the corresponding masks through mask_params on NvDsObjectMeta; accessing it raises this error:
TypeError: Unable to convert function return value to Python type! The signature was
(self: pyds.NvDsObjectMeta) -> _NvOSD_MaskParams
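For reference, this is essentially what the probe does (a simplified sketch following the deepstream_python_apps metadata-iteration pattern; the full code is in the attached custom_pipeline.py):

```python
def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Imports kept inside the probe so this sketch loads without DeepStream installed
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    import pyds

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Walk batch -> frames -> objects, as in the sample apps
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Bounding boxes come back fine:
            rect = obj_meta.rect_params
            print(rect.left, rect.top, rect.width, rect.height)
            # This attribute access is what raises the TypeError on our setup:
            mask = obj_meta.mask_params
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```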
I also tried pyds.NvDsInferSegmentationMeta.cast and obtained this error:
segmeta = pyds.NvDsInferSegmentationMeta.cast(user_data.user_meta_data)
AttributeError: type object 'pyds.NvDsInferSegmentationMeta' has no attribute 'cast'
To narrow things down, I followed GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications and tried to run deepstream-segmentation.py, but I get the same AttributeError as above.
Here is the custom Python pipeline, which uses NvDsObjectMeta mask_params: custom_pipeline.py (5.4 KB)
Here is the nvinfer config file config_v1.txt (1.1 KB)
Can you help us solve this issue? Do you have any idea what the source of the problem might be?
Looking forward to your feedback.