• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only): Unknown
• TensorRT Version: 8.4
• Issue Type (questions, new requirements, bugs): questions
Hi, I’m trying to mask people in a video and I’m running into accuracy issues. The model I use (PeopleSemSegNet) doesn’t mask all of the people.
However, some object detection models can detect all the people in the image and draw valid bounding boxes.
So I want to run one of these object detection models first and apply PeopleSemSegNet after the detection. In other words, I want to specify the areas where a person is standing so that PeopleSemSegNet is more likely to mask them correctly.
Therefore I have this question: Is it possible to specify ROIs when using PeopleSemSegNet with nvinfer?
I referred to deepstream-test2 in deepstream_python_apps because I use the Python bindings, and I was able to run SSD (as the PGIE) and PeopleSemSegNet (as the SGIE) in the same pipeline (I added nvtracker as well).
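For reference, this is roughly how my pipeline is set up, following the deepstream-test2 structure. This is a simplified sketch: the config file paths and element names are placeholders from my own setup, and the source bin, tracker properties, bus handling, and main loop are omitted here.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("deepstream-pipeline")

# Elements, following the deepstream-test2 layout
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")    # SSD detector
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")  # PeopleSemSegNet
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 1)
streammux.set_property("batched-push-timeout", 40000)

# Config file paths below are placeholders for my own configs
pgie.set_property("config-file-path", "config_infer_primary_ssd.txt")
sgie.set_property("config-file-path", "config_infer_secondary_peoplesemsegnet.txt")

for elem in (streammux, pgie, tracker, sgie, nvvidconv, nvosd, sink):
    pipeline.add(elem)

# Source bin is linked to streammux elsewhere; the rest is linked in this order
streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie)
sgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(sink)
```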
But I don’t know how to set ROIs in the secondary GIE. Could you point me to additional resources that would help me achieve this?