Using yolo-pose and yolo-seg together in one Python app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing it.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi,

I would like to use both yolo-seg and yolo-pose as stages before the tracker. Is that possible?
What exactly do I need to set in the config files to do that?
How do I access the mask in the probe function?
I see that both pose and seg use obj_meta.mask_params, so how do I make each use its own?

thanks

I think it is possible; you can refer to this sample.

Just configure it as back-to-back instance segmentation.
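
For reference, here is a minimal sketch of what the two nvinfer config files might look like, assuming both models run as primary GIEs on the full frame. The file names are placeholders, and the model-specific keys (engine file, labels, custom parser) are omitted; none of this is taken verbatim from the sample mentioned above:

```
# seg_pgie_config.txt (illustrative)
[property]
gie-unique-id=1
process-mode=1          # primary GIE, operates on the full frame
network-type=3          # instance segmentation
output-instance-mask=1  # attach masks to the object meta
cluster-mode=4          # no clustering, as used for instance segmentation
# plus model-engine-file, labels, and the custom parse function
# required by your YOLO export

# pose_pgie_config.txt (illustrative)
[property]
gie-unique-id=2         # must differ from the first model
process-mode=1
network-type=3          # per the discussion below, also run as instance segmentation
output-instance-mask=1
cluster-mode=4
```

Each nvinfer element then points at its own file via its config-file-path property.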

nvtracker can only track objects with bboxes. If yolo-seg or yolo-pose can output bboxes to nvtracker, you can put them before nvtracker.

Where did you see such code?

For segmentation, here:

For pose estimation, here:

DeepStream uses metadata (see Metadata in DeepStream) to store and transfer information through the elements after the nvstreammux plugin in the pipeline. You can get the metadata in pad probe functions. There are many DeepStream sample applications, and almost every one contains sample code for getting the batch meta, frame meta, object meta, and so on.
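
As a minimal illustration, a pad probe in Python might walk the metadata like this; the probe placement and the function name are illustrative, not from this thread:

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def meta_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Retrieve the batch metadata attached downstream of nvstreammux.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # obj_meta.rect_params holds the bbox; obj_meta.mask_params
            # holds the mask data discussed in this thread.
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach it, e.g., to the sink pad of the element after the second model:
# some_pad.add_probe(Gst.PadProbeType.BUFFER, meta_probe, None)
```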

The Python samples are also available: NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com)

Thank you for your reply. The problem is, as I showed in my last reply, that both the pose estimator and the segmenter seem to get their data from the same object: obj_meta.mask_params.get_mask_array(). Since I need the output of both the pose estimator and the segmenter, how do I get the correct data for each model?

According to the sample you refer to, both models are instance segmentation models, while NvDsObjectMeta supports only one mask array. The suggestion is to change one of the models to the "unknown type" and set its mask array in a customized user meta (NvDsUserMeta) attached to the NvDsObjectMeta. Then both mask arrays are available.
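
A rough sketch of the attachment pattern in Python, assuming a probe placed downstream of the second model. Note that pyds can only populate user_meta_data with structures it has bindings for (see the deepstream-user-metadata-test sample), so carrying a full mask array from Python typically requires a custom binding; the meta type string below is hypothetical:

```python
import pyds

# Hypothetical user-meta type for the second model's masks.
SEG_MASK_META = pyds.nvds_get_user_meta_type("CUSTOM.SEG.MASK")

def attach_mask_user_meta(batch_meta, obj_meta):
    user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
    if not user_meta:
        return
    # user_meta.user_meta_data must point to a structure pyds has
    # bindings for; a custom binding would hold the copied mask buffer.
    user_meta.base_meta.meta_type = SEG_MASK_META
    pyds.nvds_add_user_meta_to_obj(obj_meta, user_meta)
```

With this in place, obj_meta.mask_params keeps the first model's mask while the second model's mask travels in the attached NvDsUserMeta.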

Thank you, it works.
