Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier
• DeepStream Version: 6.1.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.4
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Memory overload
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name: for which plugin or for which sample application, and the function description.)
My question is: by doing so, is the original image cropped and fed to the SGIE, or does the application preserve the original image size and run inference only inside the bbox?
What is your exact use case? You can refer to our source code directly: sources\gst-plugins\gst-nvdspreprocess or sources\gst-plugins\gst-nvinfer. There is a detailed processing flow in the code. Thanks.
We have a very classic use case. We want to detect workers on a construction site, then crop the worker image within the bbox and feed it to the SGIE to detect whether they are wearing the right protective gear, such as a helmet and a reflective vest.
We previously had a pipeline with only one GIE, but the model failed to detect the gear when the workers are far away and thus appear very small in the frame.
You can change the primary object bbox parameters (NvDsObjectMeta rect_params) or create new objects (NvDsObjectMeta) with the required bbox and use them as primary objects.
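A minimal sketch of the first approach, as a pad probe on the PGIE source pad. The probe name and the 10% padding factor are illustrative, not from this thread:

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Pad probe on the PGIE src pad: enlarge every detected bbox before the
 * SGIE sees it, clamping to the frame so the crop stays inside the
 * boundary. */
static GstPadProbeReturn
pgie_src_pad_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      NvOSD_RectParams *r = &obj_meta->rect_params;

      /* Illustrative: pad the bbox by 10% on each side, then clamp
       * the result to the source frame dimensions. */
      float pad_x = r->width * 0.1f;
      float pad_y = r->height * 0.1f;
      float left   = MAX (r->left - pad_x, 0.0f);
      float top    = MAX (r->top - pad_y, 0.0f);
      float right  = MIN (r->left + r->width + pad_x,
          (float) frame_meta->source_frame_width);
      float bottom = MIN (r->top + r->height + pad_y,
          (float) frame_meta->source_frame_height);

      r->left = left;
      r->top = top;
      r->width = right - left;
      r->height = bottom - top;
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Attach it with gst_pad_add_probe() on the PGIE (or tracker) src pad, so the modified rect_params are what the downstream SGIE crops.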
This seems to be working, but how can I make sure the SGIE is operating only on the cropped bbox image? One way would be to save the image that is fed into the SGIE, but how can I do that?
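For reference, one way to verify this is the object-encoder API demonstrated in the deepstream-image-meta-test sample, which writes each object crop to a JPEG. A rough sketch, assuming DeepStream 6.1 (the nvds_obj_enc_create_context() signature gained a GPU-id argument in later releases, and the probe placement and file naming here are illustrative):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvbufsurface.h"
#include "nvds_obj_encode.h"

/* Created once at init with obj_ctx = nvds_obj_enc_create_context ();
 * freed at shutdown with nvds_obj_enc_destroy_context (obj_ctx). */
static NvDsObjEncCtxHandle obj_ctx;

static GstPadProbeReturn
save_crops_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;
  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return GST_PAD_PROBE_OK;

  NvBufSurface *surface = (NvBufSurface *) map.data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      NvDsObjEncUsrArgs args = { 0 };
      args.saveImg = TRUE;          /* write the crop to disk as JPEG */
      args.attachUsrMeta = FALSE;
      g_snprintf (args.fileNameImg, sizeof (args.fileNameImg),
          "crop_%u_%" G_GUINT64_FORMAT ".jpg",
          frame_meta->source_id, obj_meta->object_id);
      nvds_obj_enc_process (obj_ctx, &args, surface, obj_meta, frame_meta);
    }
  }
  nvds_obj_enc_finish (obj_ctx);    /* flush the encode batch */
  gst_buffer_unmap (buf, &map);
  return GST_PAD_PROBE_OK;
}
```

The crops are taken from the batched surface using each object's rect_params, so they reflect the same regions the SGIE is given (before nvinfer's own scaling to the model input resolution).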
Hi, I have tried to use the gie-unique-id and operate-on-gie-id params in my PGIE and SGIE configs respectively, but when I run the program I get the following error.
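For context, the intended wiring between the two models normally looks like this in the nvinfer config files (the ID values are illustrative):

```
# pgie_config.txt
[property]
gie-unique-id=1

# sgie_config.txt
[property]
gie-unique-id=2
process-mode=2        # secondary: operate on objects, not full frames
operate-on-gie-id=1   # only process objects produced by the PGIE
```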
It means the scaling ratio between the source and destination video is too large. Could you attach your demo app (models, videos, code, config files)? You can message all of that to me.
You need to modify the open-source code if you use DeepStream 6.1; please refer to the link below: https://github.com/zhouyuchong/face-recognition-deepstream/issues/24
If you update the DeepStream version to 6.2, you can just set crop-objects-to-roi-boundary=1 in the config file.
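That property goes under [property] in the SGIE's nvinfer config, e.g. (file name illustrative):

```
# sgie_config.txt -- requires DeepStream 6.2 or newer
[property]
crop-objects-to-roi-boundary=1
```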
No, because the engine file can only be used in the environment where it was built. If I want to use it in my Jetson environment, I need to regenerate the engine file from the original model file.
Based on previous experience, it is likely an issue of the bbox crossing the frame boundary.
You can also try setting scaling-compute-hw=1 in your config file to use the GPU instead of the VIC for scaling.
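That is also a [property] key in the nvinfer config; per the Gst-nvinfer documentation the values are 0 = platform default, 1 = GPU, 2 = VIC (Jetson only):

```
[property]
scaling-compute-hw=1   # 0 = default, 1 = GPU, 2 = VIC (Jetson only)
```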