Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.2
• NVIDIA GPU Driver Version (valid for GPU only)
525.105.17
• Issue Type (questions, new requirements, bugs)
questions
• Requirement details (for new requirements: include the module name, the plugin or sample application it concerns, and a description of the function)
My pipeline has a primary_bin (nvinfer) before the tracker and several secondary detectors/classifiers after the tracker. After I added more than 50 sources, I occasionally get an error like this:
gstnvtracker: NvBufSurfTransform failed with error -3 while converting buffer
gstnvtracker: Failed to convert input batch.
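For context, the elements are linked in this order (a simplified sketch; the variable names are illustrative rather than the exact ones in my program):

gst_element_link_many (streammux, primary_bin, m_tracker,
                       secondary_detector_bin, secondary_classifier_bin, sink, NULL);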
Here’s my tracker configuration:
BaseConfig:
  minDetectorConfidence: 0    # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  preserveStreamUpdateOrder: 0    # When assigning new target ids, preserve input streams' order to keep target ids in a deterministic order over multiple runs
  maxTargetsPerStream: 150    # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5    # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.2    # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 0    # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 30    # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1    # If the shadowTrackingAge reaches this threshold while in the TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 1    # Use 64-bit long Unique ID when assigning tracker ID.

DataAssociator:
  dataAssociatorType: 0    # the type of data associator among { DEFAULT=0 }
  associationMatcherType: 0    # the type of matching algorithm among { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1    # If checked, only the same-class objects are associated with each other. Default: true

  # Thresholds in matching scores to be considered as a valid candidate for matching
  minMatchingScore4Overall: 0.8    # Min total score
  minMatchingScore4SizeSimilarity: 0.6    # Min bbox size similarity score
  minMatchingScore4Iou: 0.0    # Min IOU score
  thresholdMahalanobis: 9.4877    # Max Mahalanobis distance based on Chi-square probabilities

StateEstimator:
  stateEstimatorType: 2    # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  noiseWeightVar4Loc: 0.05    # weight of process and measurement noise for bbox center; if set, location noise will be proportional to box height
  noiseWeightVar4Vel: 0.00625    # weight of process and measurement noise for velocity; if set, velocity noise will be proportional to box height
  useAspectRatio: 1    # use aspect ratio in Kalman filter's observation

ReID:
  reidType: 1    # the type of reid among { DUMMY=0, DEEP=1 }
  batchSize: 128    # batch size of reid network
  workspaceSize: 1000    # workspace size to be used by reid engine, in MB
  reidFeatureSize: 128    # size of reid feature
  reidHistorySize: 100    # max number of reid features kept for one object
  inferDims: [128, 64, 3]    # reid network input dimension, CHW or HWC based on inputOrder
  inputOrder: 1    # reid network input order among { NCHW=0, NHWC=1 }
  colorFormat: 0    # reid network input color format among { RGB=0, BGR=1 }
  networkMode: 1    # reid network inference precision mode among { fp32=0, fp16=1, int8=2 }
  offsets: [2.1179039, 2.03571, 1.80444]    # array of values to be subtracted from each input channel, with length equal to the number of channels
  netScaleFactor: 0.0174291938997821    # scaling factor for reid network input after subtracting offsets
  inputBlobName: "input"    # reid network input layer name
  outputBlobName: "output"    # reid network output layer name
  #uffFile: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/mars-small128.uff"    # absolute path to reid network uff model
  modelEngineFile: "myenginefile"    # engine file path
  keepAspc: 0    # whether to keep aspect ratio when resizing input objects for reid
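To spell out the two preprocessing keys above: each input pixel in channel c is normalized as

    out = netScaleFactor * (in - offsets[c])

before being fed to the reid network.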
The remaining properties are set in code:
g_object_set (G_OBJECT (m_tracker), "tracker-width", 640,
"tracker-height", 384,
"gpu-id", 0,
"ll-lib-file", "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so",
NULL);
g_object_set (G_OBJECT (m_tracker), "enable-batch-process",
1, NULL);
g_object_set (G_OBJECT (m_tracker), "enable-past-frame",
1, NULL);
g_object_set (G_OBJECT (m_tracker), "display-tracking-id",
1, NULL);
g_object_set (G_OBJECT (m_tracker), "compute-hw",1,NULL);
I set crop-objects-to-roi-boundary=1 on the primary_bin, so the object rectangles fed to the tracker should already be clipped to the frame boundary.
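For reference, in the nvinfer config file that is just the line:

crop-objects-to-roi-boundary=1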
What else should I do? Also, which option should I set on the tracker if I want it to skip specific class IDs coming from the detector?