How can images from custom post-processing be used as input for a secondary GIE?

I have two models: a primary GIE for detection and a secondary GIE for identification.
I need to apply an affine transform to the region detected by the primary GIE (an irregular quadrilateral) and then crop it as the input to the secondary GIE. Can this be done?

Hardware Platform: GPU
DeepStream Version: 6.0.1
GPU: 2080ti

You can refer to the link below; note that the demo sends a regular (axis-aligned) rectangle to the SGIE. You can give it a try, and if you have any questions, add a comment here.
https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app
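The warp you describe is a standard four-point perspective transform: map the detected quadrilateral's corners onto an upright rectangle, then sample the frame through that mapping (in a DeepStream probe you would typically apply it with OpenCV's `warpPerspective` or `NvBufSurfTransform`). As a minimal sketch of the math only, with hypothetical corner coordinates not taken from the lpr demo:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography H mapping the 4 src corners onto dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=np.float64),
                        np.array(b, dtype=np.float64))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Map one point through the homography (with perspective divide)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Irregular quadrilateral detected by the PGIE (hypothetical coordinates),
# mapped to an upright 96x48 rectangle that becomes the SGIE input crop.
quad = [(120.0, 80.0), (260.0, 95.0), (250.0, 170.0), (110.0, 150.0)]
rect = [(0.0, 0.0), (96.0, 0.0), (96.0, 48.0), (0.0, 48.0)]
H = perspective_matrix(quad, rect)
```

The same `H` is what you would pass to `cv2.warpPerspective(frame, H, (96, 48))` to produce the rectified crop.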


Thank you for your reply. I also have a few questions:

  1. How do I extract video frames into OpenCV, or where can I find the video frames?
  2. How does the secondary GIE find its input, and where does the data need to be stored before the secondary GIE can use it?
  3. Video captured by a mobile phone is rotated 90˚. Does Gst-nvinfer support video rotation?
  4. Does DeepStream support dynamic models? The secondary GIE input size is not fixed.

1. See the documentation on accessing NvBufSurface memory in OpenCV:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_sample_custom_gstream.html#accessing-nvbufsurface-memory-in-opencv
2. You can read up on NvBufSurface:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/sdk-api/group__ds__nvbuf__api.html
3. You can use nvvideoconvert to rotate the video first:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvvideoconvert.html
4. What do you mean by "dynamic models" and the secondary GIE input size (width, height, batch size)?
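As a hedged sketch, a rotation can be inserted before the streammux/inference stage with nvvideoconvert's `flip-method` property (file name is a placeholder; check `gst-inspect-1.0 nvvideoconvert` on your DeepStream version for the exact enum value that corresponds to a 90˚ rotation):

```
gst-launch-1.0 filesrc location=input.mp4 ! qtdemux ! h264parse ! \
  nvv4l2decoder ! nvvideoconvert flip-method=1 ! \
  'video/x-raw(memory:NVMM)' ! fakesink
```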

My dynamic model's batch size is 1, but the width and height are not fixed; they range from 64 to 1024, and each cropped image has a different size. In TensorRT, I can call context->setBindingDimensions(inputIndex, Dims4(1, C, H, W)) to set the dimensions for each run.

Since there has been no update from you for a while, we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

You can refer to the link below; nvinfer supports explicit full-dimension networks. https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html
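As a hedged sketch of the relevant nvinfer config keys for a full-dimension (explicit-batch) ONNX model (file name and dimensions are placeholders): nvinfer builds the engine with fixed `infer-dims` and scales each detected crop to that size, so a per-crop setBindingDimensions call as in standalone TensorRT is not exposed.

```
[property]
onnx-file=model.onnx
# Input shape (C;H;W) the engine is built with; each SGIE crop is
# resized to this resolution before inference.
infer-dims=3;128;128
batch-size=1
# 0 = keep the network's explicit-batch (full-dimension) mode
force-implicit-batch-dim=0
```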

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.