What does GIE stand for in PGIE and SGIE in DeepStream?

I want to know why we use GIE in DeepStream. What is its significance?
I tried googling and reading up, but I didn't find anything.

GIE is the abbreviation of "GPU Inference Engine".

Is your question why we named the inference element "GIE"?
Or is your question why the GIE module is used in DeepStream?

Is the GIE the same as the engine file that is created the first time we run a DeepStream app?
To elaborate: I made a custom YOLO DeepStream app by copying the config files from the YOLO detector folder. Then I copied the weights and cfg files and ran the deepstream-yolov3 app.txt.
The first time it ran, it created an engine file called model_b01_fp16.engine.

So, long story short: is that engine file the same as the GIE in the apps?

1) Basically, GIEs are the inference engines of object detectors, used to run inference faster.
2) Primary object detectors use a PGIE and secondary object detectors use an SGIE.

Please clarify

A GIE in DeepStream corresponds to an nvinfer plugin instance. One nvinfer instance uses one inference model. The model file can be *.trt, *.onnx, etc., but it is ultimately converted to a *.engine file by TensorRT.
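As a concrete illustration, a minimal nvinfer (GIE) config sketch might look like the fragment below; the file names and property values here are hypothetical placeholders, not taken from the thread. On the first run nvinfer builds the *.engine file from the source model via TensorRT; on later runs, if a compatible cached engine is found, it is loaded directly:

```ini
# Hypothetical nvinfer (GIE) config sketch -- all paths/values are placeholders
[property]
gpu-id=0
# Source model; nvinfer converts it to a TensorRT engine on first run
onnx-file=model.onnx
# Cached engine file; if present and compatible, it is loaded instead of rebuilding
model-engine-file=model_b1_gpu0_fp16.engine
batch-size=1
# Precision: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 0=detector, 1=classifier, 2=segmentation
network-type=0
num-detected-classes=80
```

This also explains the engine file name pattern observed above: the batch size and precision mode chosen in the config are encoded into the generated engine's file name.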

  1. Inference is done with TensorRT. The engine file is generated by TensorRT and can be loaded by it (https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#initialize-engine).
  2. The inference engines can be detectors, classifiers, segmentation models, etc.; each GIE is a separate inference model.
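To tie the PGIE/SGIE naming back to nvinfer: each nvinfer instance is given a `gie-unique-id`, and a secondary GIE points at the primary's id with `operate-on-gie-id`, so it runs only on the objects the primary detected. The fragments below are an illustrative sketch with assumed values, not configs from the thread:

```ini
# Hypothetical configs -- two separate nvinfer instances (PGIE + SGIE)

# pgie_config.txt (detector, runs on full frames)
[property]
gie-unique-id=1
process-mode=1          # 1 = primary: infer on full frames
network-type=0          # 0 = detector

# sgie_config.txt (classifier, runs on the PGIE's detections)
[property]
gie-unique-id=2
process-mode=2          # 2 = secondary: infer on detected objects
network-type=1          # 1 = classifier
operate-on-gie-id=1     # only process objects produced by GIE id 1
operate-on-class-ids=0  # hypothetical: only classify objects of class 0
```

Each of these configs drives its own nvinfer element in the pipeline, which is why each GIE is a separate inference model.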