I am reading the “deepstream-test2” application provided with the DeepStream SDK (version 4 and later), in which the GStreamer pipeline is created as follows:
filesrc → decode → nvstreammux → nvinfer (primary detector) → nvtracker → nvinfer (secondary classifier) → nvosd → renderer
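For reference, the pipeline above could be sketched as a `gst-launch-1.0` command roughly equivalent to what the sample app builds in C. This is an assumption-laden sketch, not the sample's exact code: the config file names follow the test2 sample, but element names and properties vary between DeepStream versions, and it only runs on a machine with DeepStream installed.

```shell
# Hypothetical gst-launch-1.0 equivalent of the deepstream-test2 pipeline.
# File paths and config names are illustrative; adjust for your install.
gst-launch-1.0 \
  filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
  mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=dstest2_pgie_config.txt ! \
  nvtracker ll-lib-file=libnvds_nvmultiobjecttracker.so ! \
  nvinfer config-file-path=dstest2_sgie1_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```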
- I want to know how nvinfer (primary) and nvtracker work when a frame arrives.
- Does nvinfer (primary) detect objects in every frame? If so, what is the purpose of nvtracker?