Delay in NvDsAnalytics Line Crossing Events

We are using DeepStream 5.1 in production for use cases like person counting, vehicle counting, and so forth. We have added a probe that crops the image of the person/vehicle as soon as the line is crossed. We detect the line crossing using the NvDsAnalytics plugin. Our videos are currently recorded at 5 FPS.

Problem: Sometimes (10-15% of the time), the images are cropped much later, after the person/vehicle has already crossed the line. By this time, the person has moved far away and the resolution of the image crop is much smaller. We used a filesink to visualize the videos and, apparently, the NvDsAnalytics/OSD plugin emits the line-crossing event when the person has already crossed the line and moved on further; there is a delay in the bounding box color change from red → blue. I’ve attached the tracker YML file in case that helps.

%YAML:1.0

  NvDCF:
    # [General]
    useUniqueID: 1    # Use 64-bit long Unique ID when assigning tracker ID. Default is [true]
    maxTargetsPerStream: 99 # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

    # [Feature Extraction]
    useColorNames: 1     # Use ColorNames feature
    useHog: 1            # Use Histogram-of-Oriented-Gradient (HOG) feature
    useHighPrecisionFeature: 1   # Use high-precision in feature extraction. Default is [true]

    # [DCF]
    filterLr: 0.15 # learning rate for DCF filter in exponential moving average. Valid Range: [0.0, 1.0]
    filterChannelWeightsLr: 0.22 # learning rate for the channel weights among feature channels. Valid Range: [0.0, 1.0]
    gaussianSigma: 0.75 # Standard deviation for Gaussian for desired response when creating DCF filter [pixels]
    featureImgSizeLevel: 3 # Size of a feature image. Valid range: {1, 2, 3, 4, 5}, from the smallest to the largest
    SearchRegionPaddingScale: 1 # Search region size. Determines how large the search region should be scaled from the target bbox.  Valid range: {1, 2, 3}, from the smallest to the largest

    # [MOT] [False Alarm Handling]
    maxShadowTrackingAge: 15  # Max length of shadow tracking (the shadow tracking age is incremented when (1) there's detector input yet no match or (2) tracker confidence is lower than minTrackerConfidence). Once reached, the tracker will be terminated.
    probationAge: 0           # Once the tracker age (incremented at every frame) reaches this, the tracker is considered to be valid
    earlyTerminationAge: 15    # Early termination age (in terms of shadow tracking age) during the probation period. If reached during the probation period, the tracker will be terminated prematurely.

    # [Tracker Creation Policy] [Target Candidacy]
    minDetectorConfidence: 0.4  # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking
    minTrackerConfidence: 0.2  # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
    minTargetBboxSize: 5      # If the width or height of the bbox size gets smaller than this threshold, the target will be terminated.
    minDetectorBboxVisibilityTobeTracked: 0.5  # If the detector-provided bbox's visibility (i.e., IOU with image) is lower than this, it won't be considered.
    minVisibiilty4Tracking: 0.5  # If the visibility of the tracked object (i.e., IOU with image) is lower than this, it will be terminated immediately, assuming it is going out of scene.

    # [Tracker Termination Policy]
    targetDuplicateRunInterval: 5 # The interval in which the duplicate target detection removal is carried out. A Negative value indicates indefinite interval. Unit: [frames]
    minIou4TargetDuplicate: 0.9 # If the IOU of two target bboxes are higher than this, the newer target tracker will be terminated.

    # [Data Association] Matching method
    useGlobalMatching: 0   # If true, enable a global matching algorithm (i.e., Hungarian method). Otherwise, a greedy algorithm will be used.
    usePersistentThreads: 0 # If true, create data association threads once and re-use them

    # [Data Association] Thresholds in matching scores to be considered as a valid candidate for matching
    minMatchingScore4Overall: 0.0   # Min total score
    minMatchingScore4SizeSimilarity: 0.2    # Min bbox size similarity score
    minMatchingScore4Iou: 0.1       # Min IOU score
    minMatchingScore4VisualSimilarity: 0.2    # Min visual similarity score

    # [Data Association] Weights for each matching score term
    matchingScoreWeight4VisualSimilarity: 0.7  # Weight for the visual similarity (in terms of correlation response ratio)
    matchingScoreWeight4SizeSimilarity: 0.0    # Weight for the Size-similarity score
    matchingScoreWeight4Iou: 0.1               # Weight for the IOU score
    matchingScoreWeight4Age: 0.2               # Weight for the tracker age

    # [State Estimator]
    useTrackSmoothing: 1    # Use a state estimator
    stateEstimatorType: 1   # The type of state estimator among { moving_avg:1, kalman_filter:2 }

    # [State Estimator] [MovingAvgEstimator]
    trackExponentialSmoothingLr_loc: 0.9       # Learning rate for new location
    trackExponentialSmoothingLr_scale: 0.9     # Learning rate for new scale
    trackExponentialSmoothingLr_velocity: 0.9  # Learning rate for new velocity

    # [State Estimator] [Kalman Filter]
    kfProcessNoiseVar4Loc: 0.1   # Process noise variance for location in Kalman filter
    kfProcessNoiseVar4Scale: 0.04   # Process noise variance for scale in Kalman filter
    kfProcessNoiseVar4Vel: 0.04   # Process noise variance for velocity in Kalman filter
    kfMeasurementNoiseVar4Trk: 9   # Measurement noise variance for tracker's detection in Kalman filter
    kfMeasurementNoiseVar4Det: 9   # Measurement noise variance for detector's detection in Kalman filter

    # [Past-frame Data]
    useBufferedOutput: 0   # Enable storing of past-frame data in a buffer and report it back

    # [Instance-awareness]
    useInstanceAwareness: 1 # Use instance-awareness for multi-object tracking
    lambda_ia: 2            # Regularization factor for each instance
    maxInstanceNum_ia: 4    # The number of nearby object instances to use for instance-awareness


• Hardware Platform: T4/Jetson
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1

Can you check the CPU usage? Can you share some video so we can understand the issue?

I am hesitant to share the video as it belongs to my customer. I will check CPU usage and let you know, but since we’re using NvDCF, that shouldn’t matter, correct?

On another note, is this a known issue? Has anyone else faced it? I have changed some of the tracker parameters, and I’m wondering if this is an effect of the new parameters.

I got permission to share the video and have sent it via private message. Please let me know once you’ve had a chance to look at it. Thanks.

I checked your video. I found that the bounding box changes to blue when the center of the bottom edge of the bounding box crosses the line. This behavior seems to be expected.

  1. Do you use a LIVE source (RTSP) or a local file?
  2. Where did you add the probe function? When do you crop the image? How do you crop the image?

Regards,
Kevin

I found that the bounding box changes to blue when the center of the bottom edge of the bounding box crosses the line. This behavior seems to be expected.

No, I don’t think this is expected. Most of the time the person is already very far away when the line-crossing status changes. Furthermore, we are using extended = 0, which means a crossing is only counted when the person crosses the actual line segment (and not its infinite extension). Refer to the following ROI config, for instance:

[property]
#Width/height of the frame for which the coordinates below are specified
enable = 1
config-width = 1920
config-height = 1080


[line-crossing-stream-0]
enable = 1
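#Each crossing entry lists four points: x1;y1;x2;y2 give the crossing direction, x3;y3;x4;y4 the line endpoints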
line-crossing-Entry-0_1_1 = 483;909;707;707;485;614;999;817;
line-crossing-Exit-0_1_1 = 853;580;717;699;485;614;999;817;
line-crossing-Entry-0_2_2 = 1359;955;1457;797;1143;717;1744;855;
line-crossing-Exit-0_2_2 = 1540;648;1467;780;1143;717;1744;855;
class-id = 0
extended = 0
mode = loose

Do you use a LIVE source (RTSP) or a local file?

Local mp4 files that are stored on disk.

Where did you add the probe function? When do you crop the image? How do you crop the image?

We are adding the probe function to the tiler element. Here is our pipeline, if you are interested:

nvstreammux -> nvinfer -> nvtracker -> nvdsanalytics -> nvtiler -> nvvidconv1 ->
tee --> queue1 -> nvosd -> nvvidconv2 -> encoder -> parser -> muxer -> filesink
 |
 --> queue2 -> fpsdisplaysink
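The probe itself is attached to the tiler’s sink pad, roughly like this (a sketch; variable names are illustrative, and the callback is shown further below):

GstPad *tiler_sink_pad = gst_element_get_static_pad(tiler, "sink");
gst_pad_add_probe(tiler_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                  tiler_sink_pad_probe, this /* BasePipeline* */, NULL);
gst_object_unref(tiler_sink_pad);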

Also, we are cropping and saving the image based on the status in the NvDsAnalyticsObjInfo data structure: if the lcStatus vector size is > 0, we know the object has crossed a line. Following is the function that saves the cropped buffer (which is standard boilerplate, IMO):

/// Converts an NV12 buffer to an RGB frame
/// \param input_buf : input batched surface from the pipeline
/// \param frame_idx : index of the frame to be converted
/// \param rgbMat    : reference variable to store the converted RGB frame
/// \param crop_rect_params: crop coordinates for the NV12-to-RGB conversion
/// \return rgb conversion status
bool BasePipeline::get_rgb_mat(NvBufSurface *input_buf,
                               guint frame_idx,
                               cv::Mat &rgbMat,
                               NvOSD_RectParams *crop_rect_params) {
    //transform nv12 to rgba
    NvBufSurfTransform_Error       err;
    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransformParams       transform_params;
    NvBufSurfTransformRect         src_rect;
    NvBufSurfTransformRect         dst_rect;
    NvBufSurface                   ip_surf;
    ip_surf = *input_buf;
    ip_surf.numFilled   = ip_surf.batchSize = 1;
    ip_surf.surfaceList = &(input_buf->surfaceList[frame_idx]);

    gint src_left    = GST_ROUND_UP_2((unsigned int) crop_rect_params->left);
    gint src_top     = GST_ROUND_UP_2((unsigned int) crop_rect_params->top);
    gint src_width   = GST_ROUND_DOWN_2((unsigned int) crop_rect_params->width);
    gint src_height  = GST_ROUND_DOWN_2((unsigned int) crop_rect_params->height);
    gint dest_width  = src_width;
    gint dest_height = src_height;

    /* Configure transform session parameters for the transformation */
    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id       = input_buf->gpuId;            // process on the same GPU as the source stream
    transform_config_params.cuda_stream  = BasePipeline::cuda_stream;   // CUDA stream

    err = NvBufSurfTransformSetSessionParams(&transform_config_params);
    if (err != NvBufSurfTransformError_Success) {
        SPDLOG_ERROR("NvBufSurfTransformSetSessionParams failed");
        return false;
    }

    /* Set the transform ROIs for source and destination */
    src_rect = {(guint) src_top, (guint) src_left, (guint) src_width, (guint) src_height};
    dst_rect = {0, 0, (guint) dest_width, (guint) dest_height};

    transform_params.src_rect         = &src_rect;
    transform_params.dst_rect         = &dst_rect;
    transform_params.transform_flag   = NVBUFSURF_TRANSFORM_FILTER |
                                        NVBUFSURF_TRANSFORM_CROP_SRC |
                                        NVBUFSURF_TRANSFORM_CROP_DST;
    transform_params.transform_filter = NvBufSurfTransformInter_Default;

    /* Memset the memory */
    NvBufSurfaceMemSet(BasePipeline::inter_buf, 0, 0, 0);
    err = NvBufSurfTransform(&ip_surf, BasePipeline::inter_buf, &transform_params);

    if (err != NvBufSurfTransformError_Success) {
        SPDLOG_ERROR("NvBufSurfTransform failed with error %d while converting buffer");
        return false;
    }
    /* Map the buffer so that it can be accessed by CPU */
    if (NvBufSurfaceMap(BasePipeline::inter_buf, 0, 0, NVBUF_MAP_READ) != 0) {
        return false;
    }
    NvBufSurfaceSyncForCpu(BasePipeline::inter_buf, 0, 0);
    // Wrap the mapped surface in a cv::Mat header (no copy) and convert to BGR
    cv::Mat rgbaFrame(BasePipeline::inter_buf->surfaceList[0].height,
                      BasePipeline::inter_buf->surfaceList[0].width,
                      CV_8UC4,
                      BasePipeline::inter_buf->surfaceList[0].mappedAddr.addr[0],
                      BasePipeline::inter_buf->surfaceList[0].pitch);
#if (CV_MAJOR_VERSION >= 4)
    cv::cvtColor(rgbaFrame, rgbMat, cv::COLOR_RGBA2BGR);
#else
    cv::cvtColor(rgbaFrame, rgbMat, CV_RGBA2BGR);
#endif

    // unmap the buffer once the conversion (which copies into rgbMat) is done
    NvBufSurfaceUnMap(BasePipeline::inter_buf, 0, 0);
    return true;
}
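And here is a minimal sketch (not our exact production code) of the probe callback that decides when to call get_rgb_mat. save_crop() is an illustrative helper and error handling is trimmed; headers needed are gstnvdsmeta.h, nvds_analytics_meta.h, and nvbufsurface.h.

static void save_crop(const cv::Mat &crop, guint64 object_id);  // illustrative helper

static GstPadProbeReturn
tiler_sink_pad_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
    auto      *pipeline = static_cast<BasePipeline *>(user_data);
    GstBuffer *buf      = GST_PAD_PROBE_INFO_BUFFER(info);

    GstMapInfo map;
    if (!gst_buffer_map(buf, &map, GST_MAP_READ))
        return GST_PAD_PROBE_OK;
    auto *surface = reinterpret_cast<NvBufSurface *>(map.data);

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    for (NvDsMetaList *lf = batch_meta->frame_meta_list; lf; lf = lf->next) {
        auto *frame_meta = static_cast<NvDsFrameMeta *>(lf->data);
        for (NvDsMetaList *lo = frame_meta->obj_meta_list; lo; lo = lo->next) {
            auto *obj_meta = static_cast<NvDsObjectMeta *>(lo->data);
            // nvdsanalytics attaches its per-object results as user meta
            for (NvDsMetaList *lu = obj_meta->obj_user_meta_list; lu; lu = lu->next) {
                auto *user_meta = static_cast<NvDsUserMeta *>(lu->data);
                if (user_meta->base_meta.meta_type != NVDS_USER_OBJ_META_NVDSANALYTICS)
                    continue;
                auto *obj_info =
                    static_cast<NvDsAnalyticsObjInfo *>(user_meta->user_meta_data);
                // A non-empty lcStatus means this object crossed a line in this frame
                if (!obj_info->lcStatus.empty()) {
                    cv::Mat crop;
                    if (pipeline->get_rgb_mat(surface, frame_meta->batch_id,
                                              crop, &obj_meta->rect_params))
                        save_crop(crop, obj_meta->object_id);
                }
            }
        }
    }
    gst_buffer_unmap(buf, &map);
    return GST_PAD_PROBE_OK;
}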

Another finding: the issue is only present with the NvDCF tracker. If I run with the KLT or IOU tracker, the counts are correct. This corroborates my argument.

OpenCV runs on the CPU, so this seems like a performance issue. Can you check with: sudo tegrastats

@kesong, that might not be the issue. I can reproduce it even after disabling the image-saving functionality.

Sorry, I can’t find the delayed blue bounding box in your video. Can you point out the playback time of the delayed blue bounding box?
Which platform are you using? T4 or Jetson?

I have included the timestamps in the message I sent you; please check. The bug is seen on T4. I will try to reproduce it on Jetson and let you know.

Hi! Any update on this request? I could reproduce the issue using only nvdsanalytics and a GStreamer command line, as follows:

gst-launch-1.0 filesrc location=<Path to input video file> \
! qtdemux ! h264parse ! nvv4l2decoder \
! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=<path to PGIE config file> \
! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so \
! nvdsanalytics config-file=<Path to NVDS Analytics Config file> \
! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<Path to output video File>

PGIE config file:

[property]
gpu-id = 0
net-scale-factor = 0.0039215697906911373
#0=RGB, 1=BGR
model-color-format = 0
custom-network-config = <Path to Network Config File>
model-engine-file= <Path to Model Engine File>
labelfile-path = <Path to Model Labels file>
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode = 2
num-detected-classes = 80
gie-unique-id = 10
network-type = 0
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode = 2
interval = 0
maintain-aspect-ratio = 1
parse-bbox-func-name = NvDsInferParseCustomYoloV4_person
custom-lib-path = <Path to Custom Lib File>
engine-create-func-name = NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold = 0.2
pre-cluster-threshold = 0.2

NvDsAnalytics config file:

[property]
#Width/height of the frame for which the coordinates below are specified
enable = 1
config-width = 1920
config-height = 1080


[line-crossing-stream-0]
enable = 1
#Label;direction;lc
#Label-Entry/Exit_CamId_LineId_LineIndexInConfig
line-crossing-Entry = 924;534;925;594;746;567;1102;560;
line-crossing-Exit = 925;594;924;534;1102;560;746;567;
class-id = 0
extended = 0
mode = loose

I have already shared the timestamps of the ghost counts with you. I will send you the custom library and model weights via private message. Please take a look at it ASAP.

Any update? @kesong? I have given you everything you need to reproduce the issue.

Sorry for the late response. Crazy busy. I can reproduce it with the command line below. We will check it and get back to you.

nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-nvdsanalytics-test$ ./deepstream-nvdsanalytics-test file:///mnt/share/15_2021-10-04-14-05-05_300.mp4

Internal bug number: 3417611

Hi, any update on this one?

It is an issue in nvdsanalytics. The issue will be fixed in the following DS release. Thanks for reporting it.

Is it possible to share a temporary patch with me? We are using this in production and can’t wait until the new DS release.