Object is not getting counted when crossing the line slowly

Hello,

I'm having an issue where an object is being tracked and crosses the line but is not counted. This happens only when the object is moving slowly (the count gets updated when the object moves fast).

What parameters affect this and how can I fix this issue?

I'm on DeepStream version 6.2 and I'm using the following configurations:

[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
#enable-past-frame=1 ## tried with past frame enabled as well
enable-batch-process=1
display-tracking-id=1

[line-crossing-stream-0]
enable=1
#Label;direction;lc
line-crossing-Entry=350;160;350;140; 220;150;440;150;
class-id=1

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or sample application, and the function description.)

Hello @fanzh, thanks for your reply.

Hardware Platform (Jetson / GPU): Jetson Xavier NX
DeepStream Version: 6.2
JetPack Version (valid for Jetson only): 5.1
TensorRT Version: 8.5.2.2
NVIDIA GPU Driver Version (valid for GPU only): -
Issue Type (questions, new requirements, bugs): bugs
How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.):

The project is based on deepstream-nvdsanalytics (from the deepstream_python_apps samples). We are using a custom-trained etlt model with the label file changed; otherwise the configuration is largely unchanged. The issue occurs only when the objects (in our case, bags) move slowly across the line. Detection and tracking seem to be working fine, since we can see the bounding boxes on the UI, but the counter doesn't update when an object crosses the line. Following are the specific configurations from the respective files; a sketch of the probe we use to read the counts is included after them.

dsnvanalytics_pgie_config.txt

[property]
#gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/home/nvidia/Documents/bag_counting_use_case/bag_counting/labels.txt
model-engine-file=/home/nvidia/Documents/bag_counting_use_case/bag_counting/ssd_resnet18_epoch_100.etlt_b30_gpu0_int8.engine
tlt-encoded-model=/home/nvidia/Documents/bag_counting_use_case/bag_counting/ssd_resnet18_epoch_100.etlt
int8-calib-file=/home/nvidia/Documents/bag_counting_use_case/bag_counting/cal.bin
tlt-model-key=nvidia_tlt
infer-dims=3;300;300
uff-input-order=0
maintain-aspect-ratio=1
#scaling-compute-hw = 1
uff-input-blob-name=Input

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=1
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_infercustomparser.so

[class-attrs-all]
#pre-cluster-threshold=0.2
#eps=0.2
#group-threshold=1
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

dsnvanalytics_tracker_config.txt

[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
enable-past-frame=1
enable-batch-process=1
display-tracking-id=1

config_tracker_NvDCF_perf.yml (we have also tried the other tracker configs with their default values)

BaseConfig:
  minDetectorConfidence: 0.430  # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  enableBboxUnClipping: 1  # In case the bbox is likely to be clipped by image border, unclip bbox
  preserveStreamUpdateOrder: 1  # When assigning new target ids, preserve input streams' order to keep target ids in a deterministic order over multiple runs
  maxTargetsPerStream: 50  # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.4  # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.4009  # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 2  # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 51  # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1  # If the shadowTrackingAge reaches this threshold while in TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 0  # Use 64-bit long Unique ID when assigning tracker ID.

DataAssociator:
  dataAssociatorType: 0  # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 1  # the type of matching algorithm among { GREEDY=0, CASCADED=1 }
  checkClassMatch: 1  # If checked, only the same-class objects are associated with each other. Default: true

  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.4290  # Min total score
  minMatchingScore4SizeSimilarity: 0.3627  # Min bbox size similarity score
  minMatchingScore4Iou: 0.1  # Min IOU score
  minMatchingScore4VisualSimilarity: 0.5356  # Min visual similarity score

  # [Association Metric: Weights]
  matchingScoreWeight4VisualSimilarity: 0.3370  # Weight for the visual similarity (in terms of correlation response ratio)
  matchingScoreWeight4SizeSimilarity: 0.4354  # Weight for the Size-similarity score
  matchingScoreWeight4Iou: 0.3656  # Weight for the IOU score

  # [Association Metric: Tentative detections] only uses iou similarity for tentative detections
  tentativeDetectorConfidence: 0.2008  # If a detection's confidence is lower than this but higher than minDetectorConfidence, then it's considered as a tentative detection
  minMatchingScore4TentativeIou: 0.5296  # Min iou threshold to match targets and tentative detection

StateEstimator:
  stateEstimatorType: 1  # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  processNoiseVar4Loc: 1.5110  # Process noise variance for bbox center
  processNoiseVar4Size: 1.3159  # Process noise variance for bbox size
  processNoiseVar4Vel: 0.0300  # Process noise variance for velocity
  measurementNoiseVar4Detector: 3.0283  # Measurement noise variance for detector's detection
  measurementNoiseVar4Tracker: 8.1505  # Measurement noise variance for tracker's localization

VisualTracker:
  visualTrackerType: 1  # the type of visual tracker among { DUMMY=0, NvDCF=1 }

  # [NvDCF: Feature Extraction]
  useColorNames: 1  # Use ColorNames feature
  useHog: 0  # Use Histogram-of-Oriented-Gradient (HOG) feature
  featureImgSizeLevel: 2  # Size of a feature image. Valid range: {1, 2, 3, 4, 5}, from the smallest to the largest
  featureFocusOffsetFactor_y: -0.2000  # The offset for the center of hanning window relative to the feature height. The center of hanning window would move by (featureFocusOffsetFactor_y*featureMatSize.height) in vertical direction

  # [NvDCF: Correlation Filter]
  filterLr: 0.0750  # learning rate for DCF filter in exponential moving average. Valid Range: [0.0, 1.0]
  filterChannelWeightsLr: 0.1000  # learning rate for the channel weights among feature channels. Valid Range: [0.0, 1.0]
  gaussianSigma: 0.7500  # Standard deviation for Gaussian for desired response when creating DCF filter [pixels]

config_nvdsanalytics.txt

[property]
enable=1
#Width height used for configuration to which below configs are configured
config-width=640
config-height=360
#osd-mode 0: Don't display any lines, rois and text
#         1: Display only lines, rois and static text i.e. labels
#         2: Display all info from 1 plus information about counts
osd-mode=2
#set OSD font size that has to be displayed
display-font-size=12

[line-crossing-stream-0]
enable=1
#Label;direction;lc
line-crossing-Entry=350;180;350;150; 220;150;440;150;
#line-crossing-Exit=350;150;350;170; 220;160;440;160;
#line-crossing-Exit=789;672;1084;900;851;773;1203;732
class-id=1
#extended when 0 - only counts crossing on the configured Line
#              1 - assumes extended Line crossing counts all the crossing
extended=0
#LC modes supported:
#loose : counts all crossing without strong adherence to direction
#balanced: Strict direction adherence expected compared to mode=loose
#strict : Strict direction adherence expected compared to mode=balanced
mode=loose
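
In case it helps, this is roughly how we read the counts in the analytics probe. It is a minimal sketch adapted from the deepstream_nvdsanalytics.py sample; the probe name and the metadata fields (e.g. objLCCumCnt) are taken from the standard pyds NvDsAnalytics bindings, so treat it as illustrative rather than our exact code.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def nvanalytics_src_pad_buffer_probe(pad, info, u_data):
    # Walk the batch metadata and print the cumulative line-crossing
    # counts attached by nvdsanalytics at frame level.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type(
                    "NVIDIA.DSANALYTICSFRAME.USER_META"):
                analytics_meta = pyds.NvDsAnalyticsFrameMeta.cast(user_meta.user_meta_data)
                if analytics_meta.objLCCumCnt:
                    # e.g. {'Entry': 12}, keyed by the line label from config_nvdsanalytics.txt
                    print("Frame", frame_meta.frame_num,
                          "LC cumulative counts:", analytics_meta.objLCCumCnt)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK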

Could you share the video? Thanks, we will have a try. You can share it via the forum's private email.

Using the sample code deepstream-nvdsanalytics, your model, and your config_nvdsanalytics.txt, I can't reproduce the issue. Here is the result:
log-20230803.txt (82.6 KB)

Any further update? Is this still an issue that needs support? Thanks.

@fanzh, re-running the model with the same code actually worked (thank you for pointing that out). While comparing my config with the one in the sample code, I realised the difference was in the streammux width and height properties set in deepstream_nvdsanalytics.py. I was using 640x360 (the resolution of the stream), whereas in the default config it's set to 1920x1080.
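
For reference, here is a minimal sketch of the relevant streammux setup in deepstream_nvdsanalytics.py, assuming the standard nvstreammux properties; the constants and the single-source batch size are illustrative, and reverting to the 1920x1080 defaults is what resolved the missed counts for us:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sample defaults (illustrative constants). Setting these to the stream
# resolution (640x360) is what coincided with the missed counts for slow
# objects in our setup.
MUXER_OUTPUT_WIDTH = 1920
MUXER_OUTPUT_HEIGHT = 1080

streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
streammux.set_property("width", MUXER_OUTPUT_WIDTH)
streammux.set_property("height", MUXER_OUTPUT_HEIGHT)
streammux.set_property("batch-size", 1)                 # one source in this sketch
streammux.set_property("batched-push-timeout", 4000000)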

Out of curiosity: any idea why it did not work properly for me (slow-moving objects) when I had set it to the actual resolution of the stream, whereas it worked with the default values?
