Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier AGX
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): 5.0.2
Hi,
I am relatively new to computer vision. I am using the Python sample application with nvdsanalytics.
My pipeline consists of:
Pgie: trafficcamnet (ResNet-18, .etlt)
Sgie: vehicletypenet (.etlt)
…(rest as in the sample)
I am extracting `obj_meta.confidence` in the buffer probe.
In some cases, the confidence drops to a negative value (-0.1000 in the output below).
It doesn't matter which cluster-mode I choose, the issue persists (tried cluster-mode 1, 2 and 3).
I found an earlier topic in this forum where, for DeepStream 5.0, the issue was said to be fixed in the next patch.
What can I do now? I could build a src pad probe on the pgie and extract the metadata from there, but I expect that would cost performance and possibly make it harder to map/connect the metadata.
Is there an easy solution I am missing?
CODE part:

```python
while l_obj:
    try:
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break
    obj_counter[obj_meta.class_id] += 1
    l_user_meta = obj_meta.obj_user_meta_list
    confidence = obj_meta.confidence
    tr_confidence = obj_meta.tracker_confidence  # confidence information
    label = f"Confidence: {confidence:.4f}% tr-conf: {tr_confidence:.4f}"
    print(label)
    getty = pyds.get_string(obj_meta.text_params.display_text)
    obj_meta.text_params.display_text = getty + " " + label
    # Extract object-level metadata from NvDsAnalyticsObjInfo
```
#############################################################
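A likely cause, assuming the sample pipeline order (pgie → tracker → nvdsanalytics): with `interval=3` in the pgie config, nvinfer only runs inference on every 4th frame, and on the skipped frames the tracker propagates the objects with a negative detector confidence as a sentinel, while `tracker_confidence` stays valid. A minimal, hypothetical guard for the probe (`format_confidence` is not part of the sample):

```python
# Hedged sketch: fall back to the tracker confidence when the detector
# confidence carries the negative sentinel (object propagated by the
# tracker on a frame where nvinfer did not run, e.g. interval > 0).
def format_confidence(det_conf, trk_conf):
    """Build the display label, handling the tracker-only sentinel."""
    if det_conf < 0.0:  # negative sentinel: no detector result on this frame
        return f"tr-conf: {trk_conf:.4f} (no detection this frame)"
    return f"Confidence: {det_conf:.4f} tr-conf: {trk_conf:.4f}"
```

This keeps the existing probe intact; only the label-building line would change.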
PGIE config:

```ini
gie-unique-id=1
tlt-model-key=tlt_encode
offsets=0.0;0.0;0.0
infer-dims=3;544;960
#force-implicit-batch-dim=1
network-type=0
network-mode=2
num-detected-classes=4
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
uff-input-blob-name=input_1
model-color-format=0
maintain-aspect-ratio=0
output-tensor-meta=0
enable-dla=1
use-dla-core=0
interval=3
process-mode=1
threshold=0.05
cluster-mode=2
pre-cluster-threshold=0.2
group-threshold=0.5
dbscan-min-score=0.7
```
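For reference, `interval=3` means nvinfer performs inference only on every `interval + 1`-th frame; the frames in between carry tracker output only. A small illustrative helper (not a DeepStream API) showing which frame numbers actually get detector results under this assumption:

```python
# Hypothetical illustration: with interval=3, inference runs on every
# 4th frame (0, 4, 8, ...); the other frames rely on the tracker.
def inference_frames(interval, num_frames):
    """Frame numbers on which nvinfer actually runs, given `interval`."""
    return [f for f in range(num_frames) if f % (interval + 1) == 0]
```

Setting `interval=0` would make the detector run on every frame, at a throughput cost.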
SGIE config:

```ini
gpu-id=0
net-scale-factor=1
offsets=103.939;116.779;123.68
tlt-model-key=tlt_encode
infer-dims=3;224;224
uff-input-blob-name=input_1
batch-size=4
process-mode=2
model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
network-type=1 # 1 = classifier
num-detected-classes=6
interval=0
operate-on-gie-id=1
operate-on-class-ids=0
gie-unique-id=2
output-blob-names=predictions/Softmax
classifier-threshold=0.2
classifier-async-mode=1
```
Tracker config:

```yaml
BaseConfig:
  minDetectorConfidence: -1   # If the confidence of a detector bbox is lower than this, it won't be considered for tracking

TargetManagement:
  enableBboxUnClipping: 1     # In case the bbox is likely to be clipped by the image border, unclip it
  maxTargetsPerStream: 150    # Max number of targets to track per stream. Recommended to set > 10. Note: this value should also account for targets tracked in shadow mode. Max value depends on GPU memory capacity
  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5   # If the IOU between a newly detected object and any existing target is higher than this threshold, the newly detected object is discarded
  minTrackerConfidence: 0.1   # If the confidence of an object tracker drops below this, it is tracked in shadow mode. Valid range: [0.0, 1.0]
  probationAge: 2             # If the target's age exceeds this, the target is considered valid
  maxShadowTrackingAge: 50    # Max length of shadow tracking. If shadowTrackingAge exceeds this limit, the tracker is terminated
  earlyTerminationAge: 1      # If shadowTrackingAge reaches this threshold during the TENTATIVE period, the target is terminated prematurely

TrajectoryManagement:
  useUniqueID: 0              # Use a 64-bit unique ID when assigning tracker IDs

DataAssociator:
  dataAssociatorType: 0       # Type of data associator, from { DEFAULT=0 }
  associationMatcherType: 0   # Type of matching algorithm, from { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1          # If enabled, only same-class objects are associated with each other. Default: true
  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.0           # Min total score
  minMatchingScore4SizeSimilarity: 0.4    # Min bbox size similarity score
  minMatchingScore4Iou: 0.0               # Min IOU score
  minMatchingScore4VisualSimilarity: 0.5  # Min visual similarity score
  # [Association Metric: Weights]
  matchingScoreWeight4VisualSimilarity: 0.6  # Weight for visual similarity (in terms of correlation response ratio)
  matchingScoreWeight4SizeSimilarity: 0.0    # Weight for the size-similarity score
  matchingScoreWeight4Iou: 0.4               # Weight for the IOU score

StateEstimator:
  stateEstimatorType: 1       # Type of state estimator, from { DUMMY=0, SIMPLE=1, REGULAR=2 }
  # [Dynamics Modeling]
  processNoiseVar4Loc: 2.0    # Process noise variance for bbox center
  processNoiseVar4Size: 1.0   # Process noise variance for bbox size
  processNoiseVar4Vel: 0.1    # Process noise variance for velocity
  measurementNoiseVar4Detector: 4.0   # Measurement noise variance for the detector's detections
  measurementNoiseVar4Tracker: 16.0   # Measurement noise variance for the tracker's localization

VisualTracker:
  visualTrackerType: 1        # Type of visual tracker, from { DUMMY=0, NvDCF=1 }
```
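The association weights above imply a total matching score of roughly 0.6 × visual similarity + 0.4 × IOU (size similarity is weighted 0.0). A hypothetical sketch of that weighted sum (the actual NvDCF scoring is internal to the tracker, so this is illustrative only):

```python
# Illustrative weighted association score, using the weights from the
# tracker config above. Candidates whose component scores clear the
# min-matching thresholds would then be compared on this total.
def matching_score(visual_similarity, iou, size_similarity,
                   w_visual=0.6, w_size=0.0, w_iou=0.4):
    return (w_visual * visual_similarity
            + w_size * size_similarity
            + w_iou * iou)
```

With these weights, visual similarity dominates the association decision, which is why `tracker_confidence` can stay high even on frames without a fresh detection.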
Outputs:

case 1:

```text
deepstream | Frame Number= 119
deepstream | 5 objects :
deepstream | Confidence: 0.4648% tr-conf: 0.8164
deepstream | Confidence: 0.2081% tr-conf: 0.7645
deepstream | Confidence: 0.2432% tr-conf: 0.7071
deepstream | Confidence: 0.3403% tr-conf: 0.8785
deepstream | Confidence: 0.3550% tr-conf: 0.6047
```

case 2:

```text
deepstream | Frame Number= 120
deepstream | 5 objects :
deepstream | Confidence: -0.1000% tr-conf: 0.7942
deepstream | Confidence: -0.1000% tr-conf: 0.7517
deepstream | Confidence: -0.1000% tr-conf: 0.6100
deepstream | Confidence: -0.1000% tr-conf: 0.8599
deepstream | Confidence: -0.1000% tr-conf: 0.9654
deepstream | Confidence: -0.1000% tr-conf: 0.9482
```