DeepSORT re-identification is not working

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Nano 4GB
• DeepStream Version
6.0
• JetPack Version (valid for Jetson only)
4.6
• TensorRT Version
8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
question

When I try to run the code with this config in the DeepSORT YAML:


%YAML:1.0
################################################################################
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

BaseConfig:
  minDetectorConfidence: 0   # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  maxTargetsPerStream: 150  # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5   # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.2   # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 5 # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 30   # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1   # If the shadowTrackingAge reaches this threshold while in the TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 0   # Use 64-bit long Unique ID when assigning tracker ID.

DataAssociator:
  dataAssociatorType: 0 # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 0 # the type of matching algorithm among { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1  # If checked, only the same-class objects are associated with each other. Default: true

  # Thresholds in matching scores to be considered as a valid candidate for matching
  minMatchingScore4Overall: 0.8   # Min total score
  minMatchingScore4SizeSimilarity: 0.6  # Min bbox size similarity score
  minMatchingScore4Iou: 0.0       # Min IOU score
  thresholdMahalanobis: 9.4877    # Max Mahalanobis distance based on Chi-square probabilities

StateEstimator:
  stateEstimatorType: 2  # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  noiseWeightVar4Loc: 0.05  # weight of process and measurement noise for bbox center; if set, location noise will be proportional to box height
  noiseWeightVar4Vel: 0.00625  # weight of process and measurement noise for velocity; if set, velocity noise will be proportional to box height
  useAspectRatio: 1 # use aspect ratio in Kalman filter's observation

ReID:
  reidType: 1 # the type of reid among { DUMMY=0, DEEP=1 }
  batchSize: 100 # batch size of reid network
  workspaceSize: 1000 # workspace size to be used by reid engine, in MB
  reidFeatureSize: 128 # size of reid feature
  reidHistorySize: 100 # max number of reid features kept for one object
  inferDims: [128, 64, 3] # reid network input dimension CHW or HWC based on inputOrder
  inputOrder: 1 # reid network input order among { NCHW=0, NHWC=1 }
  colorFormat: 0 # reid network input color format among {RGB=0, BGR=1 }
  networkMode: 0 # reid network inference precision mode among {fp32=0, fp16=1, int8=2 }
  offsets: [0.0, 0.0, 0.0]  # array of values to be subtracted from each input channel, with length equal to number of channels
  netScaleFactor: 1.0 # scaling factor for reid network input after subtracting offsets
  inputBlobName: "images" # reid network input layer name
  outputBlobName: "features" # reid network output layer name
  uffFile: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/mars-small128.uff" # absolute path to reid network uff model
  modelEngineFile: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/mars-small128.uff_b100_gpu0_fp32.engine" # engine file path
  keepAspc: 1 # whether to keep aspect ratio when resizing input objects for reid

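As an aside on the `thresholdMahalanobis: 9.4877` value above: it is the 95% quantile of a chi-square distribution with 4 degrees of freedom, which is the standard DeepSORT gating threshold for a 4-dimensional bounding-box measurement. A quick check (assuming SciPy is available):

```python
# Sanity check, not part of the pipeline: thresholdMahalanobis = 9.4877
# is the 0.95 quantile of a chi-square distribution with 4 degrees of
# freedom (the DeepSORT gating distance for a 4-D bbox measurement).
from scipy.stats import chi2

threshold = chi2.ppf(0.95, df=4)
print(round(threshold, 4))  # 9.4877
```

So this value should normally be left as-is unless the measurement dimensionality changes.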
I don't see person re-identification working: whenever someone leaves the frame and re-enters, the person ID always changes.

Here is the terminal output:

$ python3 rtsp_stream.py 
Creating Pipeline 
 
Adding elements to Pipeline 

Creating source bin
Linking elements in the Pipeline 

Starting pipeline 


Using winsys: x11 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Loading TRT Engine for tracker ReID...
[NvMultiObjectTracker] Loading Complete!
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
Deserialize yoloLayer plugin: yolo
0:00:11.659466258 11049     0x32b4f2a0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/tosso/Documents/tosso_isg/utils/models/trt_engine/yolov7-pre.engine
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT num_detections  1               
2   OUTPUT kFLOAT detection_boxes 25200x4         
3   OUTPUT kFLOAT detection_scores 25200           
4   OUTPUT kFLOAT detection_classes 25200           

0:00:11.661740319 11049     0x32b4f2a0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/tosso/Documents/tosso_isg/utils/models/trt_engine/yolov7-pre.engine
0:00:11.702871959 11049     0x32b4f2a0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:utils/config_yolov7.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: rtph264depay0 

Decodebin child added: h264parse0 

Decodebin child added: decodebin1 

Decodebin child added: rtppcmadepay0 

Decodebin child added: capsfilter0 

Decodebin child added: alawdec0 

In cb_newpad

gstname= audio/x-raw
Decodebin child added: nvv4l2decoder0 

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fabec40a8 (GstCapsFeatures at 0x7edc015fa0)>
Adjusting muxer's batch push timeout based on FPS of fastest source to 66666
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.

Here is the relevant part of the main code:

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    tracker.set_property('tracker-width', 640)
    tracker.set_property('tracker-height', 384)
    tracker.set_property('gpu_id', 0)
    tracker.set_property('ll-lib-file', '/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so')
    tracker.set_property('ll-config-file', 'utils/trackers/config_tracker_DeepSORT.yml')
    tracker.set_property('enable_batch_process', 1)
    tracker.set_property('enable_past_frame', 0)

Here you can see there is no error; the code runs, but not correctly. How can I solve this? I also wonder whether there is any re-id model I can use to identify persons (pb format would be nice).

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

DeepStream implements re-association in the latest versions, 6.2/6.3. Unfortunately, Jetson Nano can't upgrade to DeepStream 6.2/6.3. Jetson Orin/Xavier can support DeepStream 6.2/6.3.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.