• Hardware Platform: NVIDIA Tesla T4
• DeepStream Version: 5.1
• TensorRT Version: 7.2.2.3
• NVIDIA GPU Driver Version (valid for GPU only): 460.32…03
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Use a person detector as the primary detector and a face detector as separate TensorRT-based inference inside gst-dsexample.
Hi,
I am using 5 stream sources, and my goal is to run person detection on streams 1-3 and face detection on the remaining sources.
I know about the back-to-back detector sample, but there the two detections run serially.
I also know deepstream instance can run with one primary detector.
What I want:
I want person detection in some sources and face detection in the others, by using gst-dsexample. I applied conditions for where face detection is required.
What I have done:
I used the person detector as the primary detector and the face detector as separate TensorRT-based inference code that runs inside the gst-dsexample plugin (initialization/inference).
Success:
I got the face box output inside gst-dsexample.
Issue:
I am not able to attach these face boxes to the metadata, since the object metadata depends on the primary detector's NvDsObjectMeta.
So my questions are:
Can I remove the primary object (person detector) info in gst-dsexample, attach my face box data, and show it in the output?
Can I bypass the primary inference on a given source, do face detection there (separate TensorRT-based code in gst-dsexample), and attach the results?
Theoretically, you can, as long as the person bbox is not needed by the following modules to implement the parallel calculation. But there is another way: transfer the face bbox with user metadata.
No. All processing is done on the batch, not on individual streams or frames. If you don't want to run inference on a stream, just don't add it to the nvstreammux pipeline. E.g., per your description, you can use two pipelines: one pipeline for the 3 streams with person detection, and another pipeline to handle the 2 streams with face detection.
Thanks for the suggestions. But there is another way: transfer the face bbox with user metadata.
I did that with the attach_metadata API, but I am not able to remove the person boxes. How can I do that?
you can use two pipelines. One pipeline for the 3 streams with person detection, another pipeline to handle the 2 streams with face detection.
That seems promising and will reduce the load. Can you elaborate on how I can do that? Do I need to update deepstream-app.c? But that would be hardcoded for specific streams, I guess. I want to do it dynamically, e.g. from a config file.
It depends on how you will use the person bboxes and face bboxes. There is object-level user meta in NvDsObjectMeta; you can use nvds_add_user_meta_to_obj() to add new user meta to an object.
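A minimal sketch of that approach, attaching a face bbox as user meta on a person object inside gst-dsexample. The FaceBox struct and the "MYAPP.FACE.BOX" type string are hypothetical names for illustration; the API calls (nvds_acquire_user_meta_from_pool, nvds_get_user_meta_type, nvds_add_user_meta_to_obj) are from nvdsmeta.h and this requires the DeepStream SDK to compile:

```c
#include <glib.h>
#include "nvdsmeta.h"

/* Hypothetical payload holding one face bbox produced in gst-dsexample. */
typedef struct {
  float left, top, width, height;
  float confidence;
} FaceBox;

/* Hypothetical user meta type; register your own unique string. */
#define NVDS_USER_OBJ_FACE_META \
  (nvds_get_user_meta_type ((gchar *) "MYAPP.FACE.BOX"))

static gpointer
face_box_copy (gpointer data, gpointer user_data)
{
  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
  /* Deep-copy the payload so the copied meta owns its own memory. */
  return g_memdup (user_meta->user_meta_data, sizeof (FaceBox));
}

static void
face_box_release (gpointer data, gpointer user_data)
{
  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
  g_free (user_meta->user_meta_data);
  user_meta->user_meta_data = NULL;
}

/* Attach one face bbox to a person NvDsObjectMeta. */
static void
attach_face_to_object (NvDsBatchMeta *batch_meta,
    NvDsObjectMeta *person_obj, const FaceBox *box)
{
  NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);

  user_meta->user_meta_data = g_memdup (box, sizeof (FaceBox));
  user_meta->base_meta.meta_type = NVDS_USER_OBJ_FACE_META;
  user_meta->base_meta.copy_func = face_box_copy;
  user_meta->base_meta.release_func = face_box_release;

  nvds_add_user_meta_to_obj (person_obj, user_meta);
}
```

Downstream elements can then walk person_obj->obj_user_meta_list and pick out entries whose meta_type matches, without touching the primary detector's bboxes.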
We have many pipeline samples, e.g. deepstream-test3, which supports different numbers of streams dynamically. For deepstream-app, you can run “deepstream-app -c config1.txt -c config2.txt” to launch two pipelines.
It depends on how you will use the person bboxes and face bboxes. There is object-level user meta in NvDsObjectMeta; you can use nvds_add_user_meta_to_obj() to add new user meta to an object.
I can add custom data with nvds_add_user_meta_to_obj; I am asking how I can remove the default bboxes of the primary detector. Also, the tracker runs by default on the primary detections; can I use the tracker again for the face boxes in the gst plugin?
We have many pipeline samples, e.g. deepstream-test3, which supports different numbers of streams dynamically. For deepstream-app, you can run “deepstream-app -c config1.txt -c config2.txt” to launch two pipelines.
This point is great, but I have doubts about using the gst-dsexample plugin as common code for both. After processing, I take the output, add some custom information, and send it over the AMQP protocol. With two different config files, how can I use the same gst plugin? I want to send the information whenever a face or a person is detected.
I can add custom data with nvds_add_user_meta_to_obj; I am asking how I can remove the default bboxes of the primary detector. Also, the tracker runs by default on the primary detections; can I use the tracker again for the face boxes in the gst plugin?
Do you not need the person objects any more? The face objects and person objects do not conflict if you assign different object types; they can exist at the same time, so you don't need to release the person objects. If you insist on doing this, the object meta can be removed with the nvds_remove_obj_meta_from_frame() function.
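If you do decide to drop the person objects, a sketch of the removal loop using nvds_remove_obj_meta_from_frame(). The PGIE_CLASS_ID_PERSON value is an assumption; it depends on your model's label file. Note that the next-pointer must be saved before removing, because removal invalidates the current list node:

```c
#include "nvdsmeta.h"

#define PGIE_CLASS_ID_PERSON 0  /* assumption: depends on your model's labels */

/* Remove all person objects from every frame in the batch. */
static void
remove_person_objects (NvDsBatchMeta *batch_meta)
{
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list;
       l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    NvDsMetaList *l_obj = frame_meta->obj_meta_list;

    while (l_obj != NULL) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      /* Advance first: removing obj_meta invalidates this list node. */
      l_obj = l_obj->next;
      if (obj_meta->class_id == PGIE_CLASS_ID_PERSON)
        nvds_remove_obj_meta_from_frame (frame_meta, obj_meta);
    }
  }
}
```

Calling this after attaching the face boxes leaves only the face objects for the downstream OSD and sink elements.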
And I just ran two streams with 2 different config files. It ran well.
Since this is a single DeepStream instance, can I see the output of both in a single window, like the same tiled display? Currently 2 windows are shown.
Also, in the gst-dsexample plugin, two different pieces of logic are running, one for face detection and one for person detection.
If I use the gst plugin in both config files with different logic, one conflicts with the other, because there is only one “libnvdsgst_dsexample.so” in the /opt/nvidia/deepstream/lib folder.
Do I need to make another custom plugin for the other config file, like “libnvdsgst_dsexample2.so”?
The object types are different, so you can identify the different objects with the same code.
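One way to keep a single dsexample library shared by both configs is to branch on the object's class id inside the common processing function. A sketch, where the two class-id constants are assumptions that depend on your two models:

```c
#include "nvdsmeta.h"

#define CLASS_ID_PERSON 0  /* assumption: person detector's class id */
#define CLASS_ID_FACE   1  /* assumption: face detector's class id */

/* Shared per-frame logic: dispatch on object type instead of
 * building a second libnvdsgst_dsexample2.so. */
static void
handle_objects (NvDsFrameMeta *frame_meta)
{
  for (NvDsMetaList *l = frame_meta->obj_meta_list; l != NULL; l = l->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l->data;
    switch (obj->class_id) {
      case CLASS_ID_PERSON:
        /* person-specific logic, e.g. build the AMQP payload */
        break;
      case CLASS_ID_FACE:
        /* face-specific logic */
        break;
      default:
        break;
    }
  }
}
```

If the class ids overlap between the two models, obj->unique_component_id (the gie-unique-id of the element that produced the object) can be used as the discriminator instead.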
It is also possible to create another dsexample2 plugin if you know the basics of developing a GStreamer plugin; see the GStreamer Plugin Writer's Guide.