How do inference and the tracker work in DeepStream?

Hi,
I am reading the “deepstream-test2” application provided with the DeepStream SDK (version 4 and later).

In it, the GStreamer pipeline is created as follows:

filesrc → decode → nvstreammux → nvinfer (primary detector) → nvtracker → nvinfer (secondary classifier) → nvosd → renderer

Questions:

  1. How do nvinfer (primary) and nvtracker work when a frame arrives?
  2. Does nvinfer (primary) detect the objects in every frame? If yes, then what is the use of nvtracker?

Hi,

Here are some documents and blog posts that can give you an idea of the DeepStream pipeline:
https://developer.nvidia.com/deepstream-sdk
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html

1.
When a frame arrives, it first goes through pre-processing and is then sent to nvinfer for object detection.
After the primary detector runs, the detected bounding boxes are available.
nvtracker then follows these bounding boxes across subsequent frames using a vision-based tracking approach, which is much cheaper than running the detector on every frame.
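To illustrate the idea of carrying detector boxes across frames, here is a toy sketch of greedy IoU-based track association. This is not nvtracker's actual algorithm (DeepStream ships more sophisticated trackers such as KLT and NvDCF); all names here are illustrative, and boxes are simple (x1, y1, x2, y2) tuples.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class SimpleTracker:
    """Toy tracker: greedily match new detections to existing tracks by IoU."""

    def __init__(self, iou_threshold=0.3):
        self.next_id = 0
        self.tracks = {}  # track_id -> last known box
        self.iou_threshold = iou_threshold

    def update(self, detections):
        assigned = {}
        unmatched = set(self.tracks)
        for box in detections:
            best_id, best_iou = None, self.iou_threshold
            for tid in unmatched:
                score = iou(self.tracks[tid], box)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                # No sufficiently overlapping track: start a new one.
                best_id = self.next_id
                self.next_id += 1
            else:
                unmatched.discard(best_id)
            self.tracks[best_id] = box
            assigned[best_id] = box
        # Drop tracks that found no matching detection this frame.
        for tid in unmatched:
            del self.tracks[tid]
        return assigned
```

Because association like this only compares boxes (rather than running a full detection network), the tracker can assign stable IDs across frames at a fraction of the detector's cost.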

2.
There is a parameter called interval, which controls how many frames are skipped between inferences.
You can set this value to decide how often detection is applied, and nvtracker generates the bounding boxes for the frames where detection is skipped.
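For example, interval is set in the [property] section of the primary nvinfer configuration file (the file name below is the one used by deepstream-test2; adjust it to your setup):

```ini
# dstest2_pgie_config.txt -- primary detector configuration
[property]
# Run full inference only on every 5th frame; nvtracker supplies the
# bounding boxes for the 4 skipped frames in between.
interval=4
```

With interval=0 (the default), the detector runs on every frame and the tracker is only used to assign stable object IDs.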

Thanks.