Test the tracker alone

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson NX
• DeepStream Version
6.1
• JetPack Version (valid for Jetson only)
Version: 5.0.2-b231
• TensorRT Version
8.4.1.5-1+cuda11.4
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
Cannot run the tracker alone
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,
We want to test the tracker by itself, providing a bbox manually. I tried to use object metadata to pass a fake bbox at the beginning of the stream, but the box appears and then disappears right away. I tried tuning the tracker (increasing maxShadowTrackingAge and other parameters) and testing other trackers such as DeepSORT, but the bbox still just disappears. Below is the sample I use to pass object metadata to the tracker via nvds_add_obj_meta_to_frame. Note that we inject this bbox once, gated by track_is_init, and let the tracker continue the work, but the box disappears right away within the next frame.

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import platform
import configparser

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
UNTRACKED_OBJECT_ID = 0xffffffffffffffff

track_is_init = 0
past_tracking_meta = [0]
past_tracking_meta[0] = 0  # ::os:: setting this to 1 makes the screen go black and freeze; too much data?
def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Intiallizing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            print("object found")
            print(obj_meta.obj_label)
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
	
	
	
    # past tracking metadata
    if past_tracking_meta[0] == 1:
        l_user = batch_meta.batch_user_meta_list
        while l_user is not None:
            try:
                # Note that l_user.data needs a cast to pyds.NvDsUserMeta
                # The casting is done by pyds.NvDsUserMeta.cast()
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break
            if user_meta and user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_PAST_FRAME_META:
                try:
                    # user_meta.user_meta_data needs a cast to pyds.NvDsPastFrameObjBatch,
                    # done by pyds.NvDsPastFrameObjBatch.cast(); ownership of the
                    # underlying memory stays in the C code
                    pPastFrameObjBatch = pyds.NvDsPastFrameObjBatch.cast(user_meta.user_meta_data)
                except StopIteration:
                    break
                for trackobj in pyds.NvDsPastFrameObjBatch.list(pPastFrameObjBatch):
                    print("streamId=", trackobj.streamID)
                    print("surfaceStreamID=", trackobj.surfaceStreamID)
                    for pastframeobj in pyds.NvDsPastFrameObjStream.list(trackobj):
                        print("numobj=", pastframeobj.numObj)
                        print("uniqueId=", pastframeobj.uniqueId)
                        print("classId=", pastframeobj.classId)
                        print("objLabel=", pastframeobj.objLabel)
                        for objlist in pyds.NvDsPastFrameObjList.list(pastframeobj):
                            print('frameNum:', objlist.frameNum)
                            print('tBbox.left:', objlist.tBbox.left)
                            print('tBbox.width:', objlist.tBbox.width)
                            print('tBbox.top:', objlist.tBbox.top)
                            print('tBbox.height:', objlist.tBbox.height)
                            print('confidence:', objlist.confidence)
                            print('age:', objlist.age)
            try:
                # advance the user-meta list; without this the loop prints
                # the same stream IDs forever
                l_user = l_user.next
            except StopIteration:
                break
    return Gst.PadProbeReturn.OK

def streammux_src_pad_buffer_probe(pad,info,u_data):
    global track_is_init
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    print("streammux_src_pad_buffer_probe")
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        if track_is_init < 15:
            # Inject the fake detection into the first 15 frames only,
            # then let the tracker carry it from there
            new_object = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
            # new_object.object_id = 0  # UNTRACKED_OBJECT_ID ::os:: causes it to disappear (not tracked?)
            new_object.unique_component_id = 1
            new_object.class_id = 2
            new_object.confidence = 1.0
            new_object.obj_label = 'Test'
            new_object.rect_params.top = 500.0
            new_object.rect_params.left = 338.0
            new_object.rect_params.width = 151.0
            new_object.rect_params.height = 382.0
            new_object.rect_params.border_width = 3
            pyds.nvds_add_obj_meta_to_frame(frame_meta, new_object, None)
            track_is_init += 1
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # nvinfer element is created but intentionally not added to the pipeline;
    # the goal is to test the tracker alone
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)

    ########## Set properties of tracker
    config = configparser.ConfigParser()
    config.read('./dstest2_tracker_config.txt')
    config.sections()
    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)


    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(nvvidconv)
    pipeline.add(tracker)
    
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # We link the elements together:
    # file-source -> h264-parser -> nvv4l2-decoder -> streammux ->
    # tracker -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(tracker)
    tracker.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Let's add a probe to get informed of the generated metadata. We add the
    # probe to the sink pad of the osd element, since by that time the buffer
    # will have all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
    muxsrcpad = streammux.get_static_pad("src")
    if not muxsrcpad:
        sys.stderr.write(" Unable to get src pad of streammux \n")
    muxsrcpad.add_probe(Gst.PadProbeType.BUFFER, streammux_src_pad_buffer_probe, 0)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

I also set
tracker-width=960
tracker-height=544
to increase the features, and I also tried the defaults of DeepSORT with the ReID engine set up and built. My config_tracker_NvDCF_accuracy.yml file is below.
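
For reference, the [tracker] section of the dstest2_tracker_config.txt that the script reads is roughly this (a sketch; the ll-lib-file path matches the log further down, and enable-batch-process/enable-past-frame match the "Batch processing is ON / Past frame output is ON" log lines):

[tracker]
tracker-width=960
tracker-height=544
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_accuracy.yml
enable-batch-process=1
enable-past-frame=1

And the yml file itself: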

%YAML:1.0
################################################################################
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

BaseConfig:
  minDetectorConfidence: 0   # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  enableBboxUnClipping: 1   # In case the bbox is likely to be clipped by image border, unclip bbox
  preserveStreamUpdateOrder: 0 # When assigning new target ids, preserve input streams' order to keep target ids in a deterministic order over multiple runs
  maxTargetsPerStream: 150  # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5   # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.2   # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 3 # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 190   # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1      # If the shadowTrackingAge reaches this threshold while in TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 0   # Use 64-bit long Unique ID when assigning tracker ID. Default is [true]
  enableReAssoc: 1    # Enable Re-Assoc

  # [Re-Assoc: Motion-based]  
  minTrajectoryLength4Projection: 20  # min trajectory length required to make projected trajectory
  prepLength4TrajectoryProjection: 10  # the length of the trajectory during which the state estimator is updated to make projections
  trajectoryProjectionLength: 90  # the length of the projected trajectory

  # [Re-Assoc: Trajectory Similarity]
  minTrackletMatchingScore: 0.5   # min tracklet similarity score for matching
  maxAngle4TrackletMatching: 30   # max angle difference for tracklet matching [degree]
  minSpeedSimilarity4TrackletMatching: 0.2 # min speed similarity for tracklet matching
  minBboxSizeSimilarity4TrackletMatching: 0.6 # min bbox size similarity for tracklet matching
  maxTrackletMatchingTimeSearchRange: 20      # the search space in time for max tracklet similarity

DataAssociator:
  dataAssociatorType: 0 # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 0 # the type of matching algorithm among { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1  # If checked, only the same-class objects are associated with each other. Default: true

  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.0   # Min total score
  minMatchingScore4SizeSimilarity: 0.5  # Min bbox size similarity score
  minMatchingScore4Iou: 0.3       # Min IOU score
  minMatchingScore4VisualSimilarity: 0.6  # Min visual similarity score

  # [Association Metric: Weights]
  matchingScoreWeight4VisualSimilarity: 0.5  # Weight for the visual similarity (in terms of correlation response ratio)
  matchingScoreWeight4SizeSimilarity: 0.0    # Weight for the Size-similarity score
  matchingScoreWeight4Iou: 0.1   # Weight for the IOU score

StateEstimator:
  stateEstimatorType: 1  # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  processNoiseVar4Loc: 3.0    # Process noise variance for bbox center
  processNoiseVar4Size: 1.0   # Process noise variance for bbox size
  #processNoiseVar4Vel: 0.1    # ::os:: Original::  Process noise variance for velocity
  processNoiseVar4Vel: 0.2    # Process noise variance for velocity	
  measurementNoiseVar4Detector: 2.0    # Measurement noise variance for detector's detection
  measurementNoiseVar4Tracker: 10.0    # Measurement noise variance for tracker's localization

VisualTracker:
  visualTrackerType: 1 # the type of visual tracker among { DUMMY=0, NvDCF=1 }

  # [NvDCF: Feature Extraction]
  useColorNames: 1     # Use ColorNames feature
  useHog: 1            # Use Histogram-of-Oriented-Gradient (HOG) feature
  featureImgSizeLevel: 3  # Size of a feature image. Valid range: {1, 2, 3, 4, 5}, from the smallest to the largest
  featureFocusOffsetFactor_y: -0.2 # The offset for the center of hanning window relative to the feature height. The center of hanning window would move by (featureFocusOffsetFactor_y*featureMatSize.height) in vertical direction

  # [NvDCF: Correlation Filter]
  filterLr: 0.075 # learning rate for DCF filter in exponential moving average. Valid Range: [0.0, 1.0]
  filterChannelWeightsLr: 0.1 # learning rate for the channel weights among feature channels. Valid Range: [0.0, 1.0]
  gaussianSigma: 0.75 # Standard deviation for Gaussian for desired response when creating DCF filter [pixels]

here is how I run the demo

/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test4$ sudo python3 ./test_tracker_only_add_obj2.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

Please advise, as I have been stuck on this for days.

thank you

Does it work if you add nvinfer before nvtracker?

Thank you for your response.
The tracker is loaded, and I verified it works by increasing the detection interval in another demo: it runs fine until the next detection. If I modify the current demo to add an nvinfer before the tracker element and probe on the OSD, it is the same issue. I get my test obj_label downstream in the OSD probe print for the first frame, but it is flushed by the tracker the next time, even though I add the new object in the muxsrcpad probe before the tracker element. Notice I set new_object.object_id = 0; I also tried not setting it at all, hoping the tracker would assign it a value. Should I set it to another value for the tracker to recognize it?
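
For reference, one variant I have not been able to rule out yet is marking the injected object as untracked, the way a detector output would be (just a sketch; UNTRACKED_OBJECT_ID is the constant already defined at the top of the script):

new_object = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
# Mark the injected bbox as a fresh, untracked detection so nvtracker
# assigns its own track ID instead of treating 0 as an existing ID
new_object.object_id = UNTRACKED_OBJECT_ID  # 0xffffffffffffffff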

Below is the program output with an additional inference element before the tracker:

Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

test_tracker_only_add_obj4_filter.py:367: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
  loop = GObject.MainLoop()
Starting pipeline 


Using winsys: x11 
Opening in BLOCKING MODE 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Loading TRT Engine for tracker ReID...
**[NvMultiObjectTracker] Loading Complete!**
[NvMultiObjectTracker] Initialized
0:00:04.967688117 47024     0x39639920 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:08.113835078 47024     0x39639920 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:08.175831540 47024     0x39639920 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:08.185811282 47024     0x39639920 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest4_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
streammux_src_pad_buffer_probe
streammux_src_pad_buffer_probe
streammux_src_pad_buffer_probe
streammux_src_pad_buffer_probe
NvDsObjectMeta found
**obj_meta.obj_label = Test**
Frame Number=0 Number of Objects=1 Vehicle_count=0 Person_count=1
streammux_src_pad_buffer_probe
Frame Number=1 Number of Objects=0 Vehicle_count=0 Person_count=0
streammux_src_pad_buffer_probe
Frame Number=2 Number of Objects=0 Vehicle_count=0 Person_count=0
streammux_src_pad_buffer_probe
Frame Number=3 Number of Objects=0 Vehicle_count=0 Person_count=0
streammux_src_pad_buffer_probe
Frame Number=4 Number of Objects=0 Vehicle_count=0 Person_count=0
streammux_src_pad_buffer_probe
Frame Number=5 Number of Objects=0 Vehicle_count=0 Person_count=0
NvDsObjectMeta found
obj_meta.obj_label = Car
NvDsObjectMeta found
obj_meta.obj_label = Car
NvDsObjectMeta found
obj_meta.obj_label = Car
NvDsObjectMeta found
obj_meta.obj_label = Person
NvDsObjectMeta found
obj_meta.obj_label = Person
Frame Number=6 Number of Objects=5 Vehicle_count=3 Person_count=2 
.....
....

If you enable past_tracking_meta=1, the tracker output loop below just prints streamId=0 and
surfaceStreamID=0 forever, but there are no prints at all from the inner pyds.NvDsPastFrameObjStream.list(trackobj) loop:

for trackobj in pyds.NvDsPastFrameObjBatch.list(pPastFrameObjBatch):
    print("streamId=", trackobj.streamID)
    print("surfaceStreamID=", trackobj.surfaceStreamID)
    for pastframeobj in pyds.NvDsPastFrameObjStream.list(trackobj):
        print("numobj=", pastframeobj.numObj)
        print("uniqueId=", pastframeobj.uniqueId)

Any updates? I am stuck on this.

nvtracker depends on the detector outputting the right bbox for the object. The bbox moves as the object moves, so nvtracker gets the right input and can do the tracking. If the input to nvtracker doesn't match the object, the tracker can't work. Please check with the apps/deepstream-test2 sample below:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Python_Sample_Apps.html

Yes, that is why I am creating a fake bbox detection, which is shown and registered as an object in the log from my last post:

NvDsObjectMeta found
**obj_meta.obj_label = Test**

I also tested increasing the detection interval in the deepstream-test2 example, which shows that the tracker can actually work alone for a long time (see the config excerpt below). But when we introduce our own object bbox, it registers as an NvDsObjectMeta and appears for one frame on the nvosd display, yet fails to initialize the tracker the way the detected objects do. Clearly I am doing something wrong, and I would really appreciate more detail on why nvtracker fails to start from a good fake bbox detecting a person. I have already gone through all the samples.
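
For reference, this is how I increased the detection interval in deepstream-test2 (an excerpt; interval is the standard nvinfer property that skips inference on that many consecutive batches, so the tracker has to carry the bboxes in between):

# dstest2_pgie_config.txt (excerpt): run the detector only every 30th
# frame; nvtracker must propagate the bboxes for the frames in between
[property]
interval=30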

Do I need to change where I am probing, or add new elements?

 muxsrcpad = streammux.get_static_pad("src")

Or is it the parameters of the new object that need modifying:

#new_object.object_id = 0  # UNTRACKED_OBJECT_ID ::os:: causes it to disappear (not tracked?)
new_object.unique_component_id = 1
new_object.class_id = 2
new_object.confidence = 1.0
new_object.obj_label = 'Test'
new_object.rect_params.top = 500.0
new_object.rect_params.left = 338.0
new_object.rect_params.width = 151.0
new_object.rect_params.height = 382.0
new_object.rect_params.border_width = 3

Or maybe it is related to the configuration .txt files, such as network-type=100 or something special.
Could you try reproducing the issue, please?

thanks

I think you need to set the right bbox for every frame, so nvtracker can get the right input and track it. Something like the sketch below:
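
(A rough sketch based on the script above, reusing its imports and the UNTRACKED_OBJECT_ID constant; the bbox is injected into every frame instead of only the first 15, mimicking a per-frame detector.)

def streammux_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Inject the fake detection into *every* frame, as a detector would
        new_object = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
        new_object.object_id = UNTRACKED_OBJECT_ID  # let nvtracker assign the ID
        new_object.unique_component_id = 1
        new_object.class_id = 2
        new_object.confidence = 1.0
        new_object.rect_params.top = 500.0
        new_object.rect_params.left = 338.0
        new_object.rect_params.width = 151.0
        new_object.rect_params.height = 382.0
        pyds.nvds_add_obj_meta_to_frame(frame_meta, new_object, None)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK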

I am stuck with the exact same issue as @osos. Specifically, I'm trying to use the tracker with the Faciallandmark estimator.

This is not a detector and has network-type=100. @kesong, could you elaborate on how to set the right bbox for every frame? Isn't that what we are doing when setting the rect_params for a new_object?

Can you submit a new topic for your issue? Thanks.

I already tried setting the bbox more than once so the tracker's state estimator could build trust with its covariance, but it did not help. I played with the tracker configurations too. I think this issue is universal and faced by many; could you ask someone from the tracker team whether this can be achieved?

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one.
Thanks

Is it possible to send the same image repeatedly to the tracker for this kind of test?
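
For example, something like this might work (an untested sketch using standard GStreamer elements; frame.jpg is a hypothetical test image):

# Untested sketch: imagefreeze turns one decoded image into an endless video
# stream, so nvtracker sees the identical frame on every buffer.
source = Gst.ElementFactory.make("filesrc", "image-source")
source.set_property("location", "frame.jpg")                # hypothetical test image
jpegdec = Gst.ElementFactory.make("jpegdec", "jpeg-decoder")
freeze = Gst.ElementFactory.make("imagefreeze", "freeze")   # repeats the frame forever
conv = Gst.ElementFactory.make("nvvideoconvert", "to-nvmm") # may need an NVMM capsfilter
# Link: source -> jpegdec -> freeze -> conv -> streammux.sink_0, in place of
# filesrc -> h264parse -> nvv4l2decoder -> streammux.sink_0 in the script above.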

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.