Obtain age of tracker in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 3090
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 535.113.01
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) I am using the deepstream-demux-multi-in-multi-out sample app from deepstream_python_apps. I have added a tracker and in the pgie_src_pad_buffer_probe function that is already included, I have added print(obj_meta.object_id) to check the ids.

Now I want to check the age of each tracked object so I can build further logic on top of it. How can I read this age every frame? I have tried with NvDsTrackerMeta, but the information is not printed every frame:

Which tracker are you using, NvDCF or NvDeepSORT? Please also share how you print NvDsTrackerMeta. Can you share all the config files?

Hi, first of all thank you for answering!

I am using the NvDCF tracker.

I print the NvDsTrackerMeta like in the deepstream_test_2 example:

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Initializing object counter. (The class-id keys were cut off in the
    # original post, so count whatever class ids appear.)
    obj_counter = {}

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK

    # Retrieve batch metadata from the gst_buffer.
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer).
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta.
            # The casting is done by pyds.NvDsFrameMeta.cast() and keeps
            # ownership of the underlying memory in the C code, so the
            # Python garbage collector will leave it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] = obj_counter.get(obj_meta.class_id, 0) + 1
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen.
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        # (The remaining per-class counts were cut off in the original post.)
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Camion_count={}".format(
            frame_number, num_rects, obj_counter.get(PGIE_CLASS_ID_CAMION_NORMAL, 0))

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font, font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    # Past frame tracking metadata
    l_user = batch_meta.batch_user_meta_list
    while l_user is not None:
        try:
            # Note that l_user.data needs a cast to pyds.NvDsUserMeta.
            # The casting is done by pyds.NvDsUserMeta.cast() and keeps
            # ownership of the underlying memory in the C code, so the
            # Python garbage collector will leave it alone.
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        except StopIteration:
            break
        if user_meta and user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_PAST_FRAME_META:
            try:
                # Note that user_meta.user_meta_data needs a cast to
                # pyds.NvDsPastFrameObjBatch, done by
                # pyds.NvDsPastFrameObjBatch.cast(); ownership again stays
                # with the C code.
                pPastFrameObjBatch = pyds.NvDsPastFrameObjBatch.cast(user_meta.user_meta_data)
            except StopIteration:
                break
            for trackobj in pyds.NvDsPastFrameObjBatch.list(pPastFrameObjBatch):
                print("streamId=", trackobj.streamID)
                print("surfaceStreamID=", trackobj.surfaceStreamID)
                for pastframeobj in pyds.NvDsPastFrameObjStream.list(trackobj):
                    print("numobj=", pastframeobj.numObj)
                    print("uniqueId=", pastframeobj.uniqueId)
                    print("classId=", pastframeobj.classId)
                    print("objLabel=", pastframeobj.objLabel)
                    for objlist in pyds.NvDsPastFrameObjList.list(pastframeobj):
                        print('frameNum:', objlist.frameNum)
                        print('tBbox.left:', objlist.tBbox.left)
                        print('tBbox.width:', objlist.tBbox.width)
                        print('tBbox.top:', objlist.tBbox.top)
                        print('tBbox.height:', objlist.tBbox.height)
                        print('confidence:', objlist.confidence)
                        print('age:', objlist.age)
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

I add the infer and tracker config files.

Thank you!
config_infer_primary_yoloV5.txt (880 Bytes)
tracker_config.txt (1.2 KB)

You are printing the age from the “past frame” data. Unfortunately, the per-object metadata does not report an age parameter, so you can’t get the tracking age for every object in every frame.

Can you share more details about why you need the age of tracked objects?

Under what conditions does the “past frame” information appear? I mean, what causes this information to be generated?

I need the age because I want to compare the ages of tracked objects that have the same ID and discard the newer one.
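Since the per-object metadata does not expose an age field, one possible workaround (a sketch, not part of the DeepStream API) is to derive an age yourself inside the probe by remembering the first frame number at which each (stream, object_id) pair was seen. The `TrackAge` helper below is hypothetical plain Python with no pyds dependency:

```python
class TrackAge:
    """Approximate a per-track age by remembering the first frame each
    (stream_id, object_id) pair was observed in a buffer probe."""

    def __init__(self):
        # (stream_id, object_id) -> frame number of first observation
        self._first_seen = {}

    def update(self, stream_id, object_id, frame_num):
        """Record an observation and return the track's age in frames."""
        key = (stream_id, object_id)
        first = self._first_seen.setdefault(key, frame_num)
        return frame_num - first

    def forget(self, stream_id, object_id):
        """Drop a track once it is no longer being reported."""
        self._first_seen.pop((stream_id, object_id), None)
```

In the probe you would call something like `ages.update(frame_meta.pad_index, obj_meta.object_id, frame_meta.frame_num)` for every object. Note this measures age only from the first frame your probe sees the object, which can differ from the tracker's internal age when the object starts out in shadow mode.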

Regarding “past frame”, please refer: Gst-nvtracker — DeepStream 6.3 Release documentation

Why do you need to discard objects? Do you mean the nvtracker result isn’t accurate?

No, it is accurate, but I am developing an application with multiple cameras, and I have cases where the same object appears in more than one camera. I want to assign the same ID to all of those detections, keeping the oldest tracking ID.
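One way to realize "keep the oldest ID" across cameras, assuming you already know which per-camera tracks correspond to the same physical object (e.g. via re-identification, which is a separate problem), is to pick the shared ID from the track with the smallest first-seen frame. A minimal sketch with hypothetical field names:

```python
def merge_tracks(matched_tracks):
    """Given per-camera tracks believed to be the same physical object,
    return the ID of the oldest track (smallest first-seen frame) so all
    cameras can adopt it as the shared ID.

    matched_tracks: list of dicts with keys
        'camera', 'track_id', 'first_seen_frame' (names are illustrative).
    """
    if not matched_tracks:
        raise ValueError("no tracks to merge")
    oldest = min(matched_tracks, key=lambda t: t["first_seen_frame"])
    return oldest["track_id"]
```

For example, if camera 0 first saw its track at frame 120 and camera 1 first saw the matching track at frame 95, both would adopt camera 1's track ID.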

Please refer here for multi-camera tracking: Metropolis - Multi-camera Tracking

I have already seen that demo, but I don’t have access to any code or information about how it was developed.