How to use PeopleSemSegNet model after using normal object detection model

• Hardware Platform (Jetson / GPU) Jetson AGX Orin
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) Unknown
• TensorRT Version 8.4
• Issue Type( questions, new requirements, bugs) questions

Hi, I’m trying to mask people in a video, but I’m having accuracy issues: the model I use (PeopleSemSegNet) doesn’t mask all of the people.

However, some object detection models can detect all the people in the image and draw valid bounding boxes.
So I want to run one of these object detection models first and apply PeopleSemSegNet after the detection. That is, I want to specify the areas where a person is standing so that PeopleSemSegNet is more likely to mask them correctly.

Therefore I have this question: Is it possible to specify ROIs when using PeopleSemSegNet with nvinfer?

Could you give me some hints? Best regards.

Please use PeopleSemSegNet as a secondary GIE. See: Gst-nvinfer — DeepStream 6.1 Release documentation

We already have many SGIE samples, e.g. deepstream-test2:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_C_Sample_Apps.html
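As a sketch of what "secondary GIE" means at the config level: the SGIE's nvinfer config runs in secondary mode and is told which PGIE's objects to operate on. The id values below are illustrative, not prescribed:

```ini
[property]
# Run on objects produced by the primary detector, not on full frames
process-mode=2              # 1 = primary, 2 = secondary
gie-unique-id=2             # this instance's id (illustrative)
operate-on-gie-id=1         # unique id of the PGIE whose objects to process
# operate-on-class-ids=0    # optionally restrict to the "person" class id
network-type=2              # 2 = semantic segmentation
```

With this, nvinfer crops each detected object and feeds the crop to PeopleSemSegNet, which is effectively the per-person ROI behavior you described.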

Thanks. I’ll try it and report the results.

I referred to deepstream-test2 in deepstream_python_apps because I use the Python bindings, and I was able to run SSD (as PGIE) and PeopleSemSegNet (as SGIE) in the pipeline. (I added nvtracker, too.)

But I don’t know how to set ROIs in the secondary GIE. Could you give me additional resources that would help me achieve my goal?

Is the ROI fixed?

No.
I have no idea how to set ROIs or customize settings related to them.

It seems the nvdspreprocess plugin satisfies my requirements. I’ll try it.
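For reference, a minimal nvdspreprocess config with static ROIs looks roughly like the following. All shapes and coordinates here are illustrative placeholders and must be matched to your model and stream:

```ini
[property]
enable=1
target-unique-ids=2               # feed tensors to the SGIE with this gie-unique-id
process-on-frame=1
network-input-shape=1;3;544;960   # illustrative; must match the model's input
processing-width=960
processing-height=544

[group-0]
src-ids=0
process-on-roi=1
# Two ROIs for source 0, each as left;top;width;height
roi-params-src-0=0;0;480;544;480;0;480;544
```

Note that the downstream nvinfer instance then has to consume the preprocessed tensors from metadata (its `input-tensor-meta` property); check the Gst-nvdspreprocess and Gst-nvinfer pages for your DeepStream version, as the exact keys vary between releases.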

ROIs cannot be set dynamically.

Oh, I didn’t know that. Maybe I have to implement the feature myself…
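One way to approach that (a hypothetical sketch, not an existing DeepStream API) is to derive per-frame ROIs from the PGIE's bounding boxes in a buffer-probe callback. The DeepStream-specific metadata handling aside, the core geometry is just expanding each detection box by a margin and clamping it to the frame:

```python
def bbox_to_roi(left, top, width, height, frame_w, frame_h, margin=0.2):
    """Expand a detector bounding box by a relative margin and clamp it to
    the frame, yielding an ROI (left, top, width, height) for segmentation.

    All arguments are in pixels except `margin`, a fraction of the box size.
    Hypothetical helper for illustration; not part of pyds/DeepStream.
    """
    dx = width * margin
    dy = height * margin
    x0 = max(0, left - dx)
    y0 = max(0, top - dy)
    x1 = min(frame_w, left + width + dx)
    y1 = min(frame_h, top + height + dy)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)


# Example: a 50x50 box at (100, 100) in a 640x480 frame,
# expanded by 20% on each side.
roi = bbox_to_roi(100, 100, 50, 50, 640, 480)
print(roi)  # (90, 90, 70, 70)
```

A larger margin gives the segmentation model more context around each person at the cost of more overlap between ROIs; clamping keeps the crop valid for detections near the frame edge.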