Separating Tracker Outputs for Multiple Camera Streams

person detection → tracking → face detection → classifier 1 → classifier 2

Hi, I created this pipeline, and it works correctly with a single input.
However, when I tried using multiple inputs, the pipeline runs, but the tracker counts people across both inputs as if they were from a single camera. For example, if there are 2 people in the first camera and 3 people in the second camera, the tracker counts all 5 people as if they were from the same camera.

How can I separate the tracker for each stream?

(DeepStream 6.3)
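For context, a rough equivalent of this pipeline with two inputs, written with the GStreamer Python bindings, might look like the sketch below. The config file paths are placeholders, not the actual files from this setup, and element properties may need adjusting:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sketch of the described pipeline: two inputs batched into one nvstreammux,
# then detection -> tracking -> face detection -> two classifiers.
# All *.txt config paths below are placeholders.
pipeline = Gst.parse_launch(
    "nvstreammux name=mux batch-size=2 width=1280 height=720 "
    "batched-push-timeout=40000 "
    "! nvinfer config-file-path=person_detector_config.txt "  # person detection (PGIE)
    "! nvtracker "
    "ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so "
    "ll-config-file=config_tracker_DeepSORT.yml "
    "! nvinfer config-file-path=face_detector_config.txt "    # face detection (SGIE)
    "! nvinfer config-file-path=classifier1_config.txt "      # classifier 1
    "! nvinfer config-file-path=classifier2_config.txt "      # classifier 2
    "! nvvideoconvert ! nvdsosd ! fakesink "
    "uridecodebin uri=file:///path/to/camera1.mp4 ! mux.sink_0 "
    "uridecodebin uri=file:///path/to/camera2.mp4 ! mux.sink_1"
)
pipeline.set_state(Gst.State.PLAYING)

Note that both streams pass through the same nvtracker instance, which is why the IDs are shared across cameras.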

Could you try setting useUniqueID: 0? Please refer here for more details: Gst-nvtracker — DeepStream documentation

I still have the same problem. Here is my custom graph; do you have any suggestions to resolve this issue?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description)

NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2
DeepStream 6.3

Can you provide the graph and configuration files?

MyGraph5 (copy).txt (13.9 KB)

Can you provide the config file used in your graph?

components:
- name: Object Tracker31
  parameters:
    ll-config-file: /home/anavid-server/models_tst_amine/traking/model_tracker/config_tracker_DeepSORT.yml

%YAML:1.0

BaseConfig:
  minDetectorConfidence: 0   # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  maxTargetsPerStream: 150   # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5   # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded
  minTrackerConfidence: 0.2   # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid range: [0.0, 1.0]
  probationAge: 3             # If the target's age exceeds this, the target will be considered to be valid
  maxShadowTrackingAge: 150   # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated
  earlyTerminationAge: 1      # If the shadowTrackingAge reaches this threshold while in the TENTATIVE period, the target will be terminated prematurely

TrajectoryManagement:
  useUniqueID: 0    # Use 64-bit long unique ID when assigning tracker ID
  enableReAssoc: 1  # Enable re-association

  # [Re-Assoc: Motion-based]
  minTrajectoryLength4Projection: 20    # Min trajectory length required to make a projected trajectory
  prepLength4TrajectoryProjection: 10   # Length of the trajectory during which the state estimator is updated to make projections
  trajectoryProjectionLength: 300       # Length of the projected trajectory

  # [Re-Assoc: Trajectory Similarity]
  minTrackletMatchingScore: 0.2                # Min tracklet similarity score for matching, in terms of average IOU between tracklets
  maxAngle4TrackletMatching: 180               # Max angle difference for tracklet matching [degrees]
  minSpeedSimilarity4TrackletMatching: 0.4     # Min speed similarity for tracklet matching
  minBboxSizeSimilarity4TrackletMatching: 0.2  # Min bbox size similarity for tracklet matching
  maxTrackletMatchingTimeSearchRange: 500      # Search space in time for max tracklet similarity

DataAssociator:
  dataAssociatorType: 0       # Type of data associator among { DEFAULT=0 }
  associationMatcherType: 0   # Type of matching algorithm among { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1          # If checked, only same-class objects are associated with each other. Default: true

  # Thresholds in matching scores to be considered as a valid candidate for matching
  minMatchingScore4Overall: 0.0          # Min total score
  minMatchingScore4SizeSimilarity: 0.2   # Min bbox size similarity score
  minMatchingScore4Iou: 0.0              # Min IOU score
  thresholdMahalanobis: 9.4877           # Max Mahalanobis distance based on Chi-square probabilities

StateEstimator:
  stateEstimatorType: 2   # Type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  noiseWeightVar4Loc: 0.05      # Weight of process and measurement noise for bbox center; if set, location noise will be proportional to box height
  noiseWeightVar4Vel: 0.00625   # Weight of process and measurement noise for velocity; if set, velocity noise will be proportional to box height
  useAspectRatio: 1             # Use aspect ratio in Kalman filter's observation

ReID:
  reidType: 1           # Type of ReID among { DUMMY=0, DEEP=1 }
  batchSize: 100        # Batch size of ReID network
  workspaceSize: 1000   # Workspace size to be used by ReID engine, in MB
  reidFeatureSize: 128  # Size of ReID feature
  reidHistorySize: 100  # Max number of ReID features kept for one object
  inferDims: [128, 64, 3]   # ReID network input dimension, CHW or HWC based on inputOrder
  inputOrder: 1         # ReID network input order among { NCHW=0, NHWC=1 }
  colorFormat: 0        # ReID network input color format among { RGB=0, BGR=1 }
  networkMode: 0        # ReID network inference precision mode among { fp32=0, fp16=1, int8=2 }
  offsets: [0.0, 0.0, 0.0]  # Array of values to be subtracted from each input channel, with length equal to the number of channels
  netScaleFactor: 1.0   # Scaling factor for ReID network input after subtracting offsets
  inputBlobName: "images"     # ReID network input layer name
  outputBlobName: "features"  # ReID network output layer name
  uffFile: "model_tracker/mars-small128.uff"  # Absolute path to ReID network UFF model
  modelEngineFile: "/home/anavid-server/models_tst_amine/traking/model_tracker/mars-small128.uff_b100_gpu0_fp32.engine"  # Engine file path
  keepAspc: 1           # Whether to keep the aspect ratio when resizing input objects for ReID

If I have two video streams, with 3 people in the first stream and 2 people in the second, does the tracker assign unique IDs from 0 to 4 across all 5 individuals?

When I use useUniqueID: 0, the tracker assigns IDs from 0 to 4 across all streams. However, my requirement is to have separate ID ranges for each stream, where each stream’s IDs start from 0.

On the other hand, when using useUniqueID: 1, the tracker generates large random numbers for IDs, which is not the desired behavior in my case.

The incrementation of the lower 32-bit of the target ID is done across all the video streams in the same NvMultiObjectTracker library instantiation. You can check the quote below from the nvtracker doc here: Gst-nvtracker — DeepStream documentation

“Note that the incrementation of the lower 32-bit of the target ID is done across the whole video streams in the same NvMultiObjectTracker library instantiation. Thus, even if the unique ID generation is disabled, the tracker IDs will be unique for the same pipeline run. If the unique ID generation is disabled, and if there are three objects for Stream 1 and two objects for Stream 2, for example, the target IDs will be assigned from 0 to 4 (instead of 0 to 2 for Stream 1 and 0 to 1 for Stream 2) as long as the two streams are being processed by the same library instantiation.”
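To make that concrete, here is a small worked illustration. Per the doc, with useUniqueID: 1 the upper 32 bits hold a random number generated once per library instantiation (the value below is hypothetical), while the lower 32 bits increment across all streams; with useUniqueID: 0 the upper 32 bits are simply 0:

upper32 = 0x3F2A91C7  # hypothetical random number generated at library init

for lower32 in range(5):  # five objects total across both streams
    with_unique = (upper32 << 32) | lower32  # useUniqueID: 1 -> large 64-bit ID
    without_unique = lower32                 # useUniqueID: 0 -> IDs 0..4 across streams
    print(f"useUniqueID: 1 -> {with_unique}, useUniqueID: 0 -> {without_unique}")

This is why enabling useUniqueID does not separate the streams; it only changes how the same global counter is presented.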

Thank you for sharing this. I have carefully read the documentation. However, when I use useUniqueID: 1 to separate the tracking operation between streams, nothing changes except that the IDs become large numbers. My current requirement is to separate the tracking operation between streams while ensuring that the IDs in each stream start from 0.

Could you please suggest which parameter controls this behavior?

The incrementation of the lower 32-bit of the target ID is done across all the video streams in the same NvMultiObjectTracker library instantiation. Why do you need each stream to start from 0? If you run each stream in a different pipeline, the tracker IDs of each pipeline will start from 0.
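If the streams have to stay in the same pipeline, one possible workaround (not a built-in nvtracker feature) is to remap the global tracker IDs to per-stream IDs in a pad probe downstream of nvtracker. Below is a minimal sketch using the DeepStream Python bindings (pyds); it assumes `tracker` refers to the nvtracker element in your pipeline:

import pyds
from gi.repository import Gst

# per_stream_ids[source_id] maps the tracker's global object_id to a
# stream-local ID starting from 0. Note: the maps grow with every new
# track, so prune them if the pipeline runs for a long time.
per_stream_ids = {}

def remap_ids_probe(pad, info, user_data):
    buf = info.get_buffer()
    if not buf:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        stream_map = per_stream_ids.setdefault(frame_meta.source_id, {})
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            if obj_meta.object_id not in stream_map:
                stream_map[obj_meta.object_id] = len(stream_map)  # next free local ID
            obj_meta.object_id = stream_map[obj_meta.object_id]
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach on the tracker's src pad so every downstream element sees local IDs.
tracker.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, remap_ids_probe, None)

With this in place, everything downstream of the probe (OSD, analytics, message broker) sees per-stream IDs that start from 0, while the tracker itself keeps using its global counter internally.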