Eglglessink output becomes very slow when nvtracker is added to the pipeline

• Hardware Platform (Jetson / GPU) Jetson tx2 nx
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6.0
• TensorRT Version 10.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) questions

Hi, I run 3 pipelines as shown below.

[decoding pipeline]
appsrc ! h264parse ! nvv4l2decoder ! queue ! appsink

[analyze pipeline]
appsrc ! queue ! nvvideoconvert ! videoconvert ! capsfilter(video/x-raw, format=RGB) ! fakesink

[display pipeline]
appsrc ! nvvideoconvert ! capsfilter(video/x-raw(memory:NVMM), format=RGBA) ! nvstreammux ! nvtracker ! nvdsosd ! nvegltransform ! nveglglessink

streammux settings: live-source=TRUE, batch-size=1, width=640, height=360, batched-push-timeout=33333

nvtracker settings:
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
tracker-width=512, tracker-height=288, enable-batch-process=TRUE, qos=TRUE, display-tracking-id=TRUE
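
These properties are set via g_object_set() roughly as follows (a minimal sketch; streammux and tracker are placeholder names for the GstElement pointers):

// Sketch of setting the nvstreammux and nvtracker properties listed above.
// "streammux" and "tracker" are placeholder GstElement pointers.
g_object_set(G_OBJECT(streammux),
             "live-source", TRUE,
             "batch-size", 1,
             "width", 640,
             "height", 360,
             "batched-push-timeout", 33333,
             NULL);

g_object_set(G_OBJECT(tracker),
             "ll-lib-file", "/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so",
             "ll-config-file", "config_tracker_NvDCF_perf.yml",
             "tracker-width", 512,
             "tracker-height", 288,
             "enable-batch-process", TRUE,
             "qos", TRUE,
             "display-tracking-id", TRUE,
             NULL);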

config_tracker_NvDCF_perf.yml:

BaseConfig:
  minDetectorConfidence: 0   

TargetManagement:
  enableBboxUnClipping: 0   
  maxTargetsPerStream: 100

  minIouDiff4NewTarget: 0.5   
  minTrackerConfidence: 0.2  
  probationAge: 2
  maxShadowTrackingAge: 5 # I set this to 5 because our own model runs inference only every 5th frame (6 inferences per second)
  earlyTerminationAge: 1   

TrajectoryManagement:
  useUniqueID: 0

DataAssociator:
  dataAssociatorType: 0
  associationMatcherType: 0 
  checkClassMatch: 1 

  minMatchingScore4Overall: 0.0
  minMatchingScore4SizeSimilarity: 0.6 
  minMatchingScore4Iou: 0.0    
  minMatchingScore4VisualSimilarity: 0.7

  matchingScoreWeight4VisualSimilarity: 0.6  
  matchingScoreWeight4SizeSimilarity: 0.0
  matchingScoreWeight4Iou: 0.4 

StateEstimator:
  stateEstimatorType: 1

  processNoiseVar4Loc: 2.0  
  processNoiseVar4Size: 1.0 
  processNoiseVar4Vel: 0.1
  measurementNoiseVar4Detector: 4.0 
  measurementNoiseVar4Tracker: 16.0

VisualTracker:
  visualTrackerType: 1 

  useColorNames: 1    
  useHog: 0  
  featureImgSizeLevel: 5
  featureFocusOffsetFactor_y: -0.2

  filterLr: 0.075
  filterChannelWeightsLr: 0.1
  gaussianSigma: 0.75 

The decoding pipeline's appsink is connected to a callback that deep-copies the GstBuffer and sends it to both the analyze pipeline and the display pipeline.
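
The appsink callback looks roughly like this (a simplified sketch, not the exact code; onNewSample, analyzeAppsrc, and displayAppsrc are placeholder names):

#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>

// Placeholders for the appsrc elements of the analyze and display pipelines.
static GstElement* analyzeAppsrc = NULL;
static GstElement* displayAppsrc = NULL;

// Simplified sketch of the appsink "new-sample" callback: deep-copy the
// decoded buffer and push one copy into each downstream pipeline.
static GstFlowReturn onNewSample(GstAppSink* sink, gpointer userData) {
    GstSample* sample = gst_app_sink_pull_sample(sink);
    if (!sample) {
        return GST_FLOW_ERROR;
    }

    GstBuffer* buffer = gst_sample_get_buffer(sample);

    // deep copy so each pipeline owns its own buffer
    GstBuffer* forAnalyze = gst_buffer_copy_deep(buffer);
    GstBuffer* forDisplay = gst_buffer_copy_deep(buffer);

    // gst_app_src_push_buffer() takes ownership of the copies
    gst_app_src_push_buffer(GST_APP_SRC(analyzeAppsrc), forAnalyze);
    gst_app_src_push_buffer(GST_APP_SRC(displayAppsrc), forDisplay);

    gst_sample_unref(sample);
    return GST_FLOW_OK;
}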

In the analyze pipeline, the analysis function is connected to the fakesink's handoff callback. The analysis is our company's own logic, so I can't use the nvinfer/pgie element. When the analysis finishes, it sends the result to the display pipeline.
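
The connection to the fakesink looks roughly like this (a sketch; onHandoff and runAnalysis stand in for our internal functions):

// Placeholder for our own detection logic.
static void runAnalysis(const guint8* data, gsize size);

// Sketch of hooking the analysis logic to fakesink's "handoff" signal.
static void onHandoff(GstElement* fakesink, GstBuffer* buffer, GstPad* pad, gpointer userData) {
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // map.data points to the RGB frame produced by videoconvert;
        // run our own inference and send the results to the display pipeline
        runAnalysis(map.data, map.size);
        gst_buffer_unmap(buffer, &map);
    }
}

// during pipeline setup:
g_object_set(G_OBJECT(fakesink), "signal-handoffs", TRUE, NULL);
g_signal_connect(fakesink, "handoff", G_CALLBACK(onHandoff), NULL);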

I added a probe callback to the nvtracker element to add NvDsObjectMeta manually.
This is the code that adds the NvDsObjectMeta:

    GstBuffer* buffer = GST_PAD_PROBE_INFO_BUFFER(info);

    if (!buffer) {
        return GST_PAD_PROBE_OK;
    }

    // this is a custom meta added to the GstBuffer
    SpotMeta* spotMeta = reinterpret_cast<SpotMeta*>(gst_buffer_get_meta(buffer, MetaUtil::spotMetaApiGetType()));
    if (!spotMeta) {
        g_printerr("NO SPOT META!!\n");
        return GST_PAD_PROBE_OK;
    }
    uint32_t channel = spotMeta->channel;

    if (_objects[channel].empty()) {
        return GST_PAD_PROBE_OK;
    }

    NvDsBatchMeta* batchMeta = gst_buffer_get_nvds_batch_meta(buffer);

    if (!batchMeta) {
        g_printerr("NO BATCH META!!\n");
        return GST_PAD_PROBE_OK;
    }

    NvDsFrameMeta* frameMeta = static_cast<NvDsFrameMeta*>(batchMeta->frame_meta_list->data);

    if(!frameMeta) {
        g_printerr("NO FRAME META!!\n");
        return GST_PAD_PROBE_OK;
    }

    uint32_t frameWidth = frameMeta->source_frame_width;
    uint32_t frameHeight = frameMeta->source_frame_height;
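
    // NOTE: this display meta is acquired from the pool below but is never
    // populated or attached to the frame (e.g. via nvds_add_display_meta_to_frame())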

    NvDsDisplayMeta* displayMeta = nvds_acquire_display_meta_from_pool(batchMeta);

    if (!displayMeta) {
        g_printerr("NO DISPLAY META!!\n");
        return GST_PAD_PROBE_OK;
    }
    // own object contains normalized point of bbox
    PointF32 leftTop, bottomRight;
    Object::Type type;

    nvds_acquire_meta_lock(batchMeta);
    frameMeta->bInferDone = TRUE;
    for (const auto& obj : _objects[channel]) {
        NvDsObjectMeta* objMeta = nvds_acquire_obj_meta_from_pool(batchMeta);

        if (!objMeta) {
            g_printerr("NO OBJ META!!\n");
            nvds_release_meta_lock(batchMeta);
            return GST_PAD_PROBE_OK;
        }

        // read the bounding box and type first, so class_id below is set
        // from this object (not from the previous iteration)
        leftTop = obj.boundingBox().leftTop();
        bottomRight = obj.boundingBox().bottomRight();
        type = obj.type();

        objMeta->unique_component_id = 1;
        objMeta->confidence = obj.confidence();

        objMeta->object_id = UNTRACKED_OBJECT_ID;
        objMeta->class_id = type;

        NvOSD_RectParams& rectParams = objMeta->rect_params;
        NvOSD_TextParams& textParams = objMeta->text_params;

        rectParams.left = leftTop.x() * frameWidth;
        rectParams.top = leftTop.y() * frameHeight;
        rectParams.width = (bottomRight.x() - leftTop.x()) * frameWidth;
        rectParams.height = (bottomRight.y() - leftTop.y()) * frameHeight;

        NvDsComp_BboxInfo* detectorBbox = &(objMeta->detector_bbox_info);

        detectorBbox->org_bbox_coords.left = rectParams.left;
        detectorBbox->org_bbox_coords.top = rectParams.top;
        detectorBbox->org_bbox_coords.width = rectParams.width;
        detectorBbox->org_bbox_coords.height = rectParams.height;

        rectParams.border_width = 3;
        rectParams.has_bg_color = 0;
        rectParams.bg_color = (NvOSD_ColorParams) { 1, 1, 0, 0.4 };
        rectParams.border_color = (NvOSD_ColorParams) { 1, 0, 0, 1 };

        textParams.x_offset = objMeta->rect_params.left;
        textParams.y_offset = objMeta->rect_params.top - 20;
        textParams.set_bg_clr = 1;
        textParams.font_params.font_name = (gchar*)"Serif";
        textParams.font_params.font_size = 8;
        if (type == Type::HUMAN) {
            textParams.display_text = g_strdup("Human");
            g_strlcpy(objMeta->obj_label, textParams.display_text, MAX_LABEL_SIZE);
        }
        else if (type == Type::VEHICLE) {
            textParams.display_text = g_strdup("Vehicle");
            g_strlcpy(objMeta->obj_label, textParams.display_text, MAX_LABEL_SIZE);
        }
        else if (type == Type::BICYCLE) {
            textParams.display_text = g_strdup("Bicycle");
            g_strlcpy(objMeta->obj_label, textParams.display_text, MAX_LABEL_SIZE);
        }
        else {
            textParams.display_text = g_strdup("UNKNOWN");
            g_strlcpy(objMeta->obj_label, textParams.display_text, MAX_LABEL_SIZE);
        }
        textParams.font_params.font_color = (NvOSD_ColorParams){ 1, 1, 1, 1 };
        nvds_add_obj_meta_to_frame(frameMeta, objMeta, NULL);
    }

    _objects[channel].clear();
    nvds_release_meta_lock(batchMeta);

    return GST_PAD_PROBE_OK;

If I don't use nvtracker and instead attach this function as a probe on the nvdsosd pad, the stream looks good.
Here is a video comparing the pipeline with nvtracker and without nvtracker.
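
For clarity, the probe is attached roughly like this in both cases (a sketch, assuming the probe sits on the element's sink pad so the meta is added before it is processed; addObjectMetaProbe is the callback shown above and tracker/osd are placeholder element pointers):

// With nvtracker: probe on the tracker's sink pad so the manually added
// NvDsObjectMeta is visible to the tracker.
// Without nvtracker: probe on nvdsosd's sink pad instead.
GstPad* sinkPad = gst_element_get_static_pad(tracker /* or osd */, "sink");
gst_pad_add_probe(sinkPad, GST_PAD_PROBE_TYPE_BUFFER,
                  addObjectMetaProbe, NULL, NULL);
gst_object_unref(sinkPad);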

What did I do wrong? Please help me.

Thanks.


I don't know why the video is not playing, so I uploaded another version with a different codec.

nvtracker will occupy resources. You can try a different type of tracker based on this: Gst-nvtracker — DeepStream documentation

Hi, thanks for the reply. I tried changing the tracker from NvDCF to IOU, using the config file from /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_tracker_IOU.yml.
But the video still stutters like the one I uploaded.
I also decreased the inference interval from 6 to 5, but there was no improvement.
Is there anything more I can do to get smooth tracking?
Thanks.

Can you try adding a "queue" between plugins? Can you try setting "sync=false" on nveglglessink?
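
Applied to the display pipeline above, that would look roughly like this (a sketch; exact queue placement can vary):

[display pipeline with queues and sync disabled]
appsrc ! queue ! nvvideoconvert ! capsfilter(video/x-raw(memory:NVMM), format=RGBA) ! queue ! nvstreammux ! queue ! nvtracker ! queue ! nvdsosd ! queue ! nvegltransform ! nveglglessink sync=false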