Re-ID vector embedding is incorrect when using new streammuxer

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): RTX 4080
• DeepStream Version: 6.3, 7.0
• TensorRT Version: 8.5.3.1 for DS 6.3, 8.6.1.6 for DS 7.0
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04
• Issue Type( questions, new requirements, bugs): Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

I’m currently using DeepStream Python.

I slightly modified the DeepStream Python bindings to access the Re-ID vector embeddings.

When I used the original streammux, the Re-ID vector embeddings were fine.

However, the new streammux outputs different Re-ID vector embeddings than the original one.

For comparison, I dumped the images and found that very different images output similar Re-ID vector embeddings.
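
For reference, by "similar" I mean high cosine similarity between the embeddings. A minimal sketch of the comparison, assuming the embeddings are plain NumPy float32 vectors:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Values close to 1.0 mean the two Re-ID embeddings point in nearly the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))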

This may be a bug. Which sample are you using?

Or can you share sample code that reproduces the problem?

Unfortunately, I cannot provide the full source code.

However, my implementation is quite similar to the one below.

# uridecodebin - nvstreammux - nvinfer (person detector) - nvtracker - nvvideoconvert - caps - nvosd - nveglglessink

import logging
from typing import Any

import cv2
import numpy as np
import pyds

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

logger = logging.getLogger(__name__)
UNTRACKED_OBJECT_ID = 0xFFFFFFFFFFFFFFFF  # same sentinel value as in the C headers

def caps_src_pad_buffer_probe(unused_pad: Gst.Pad, info: Gst.PadProbeInfo, user_data: Any) -> Gst.PadProbeReturn:
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        logger.error("Unable to get Gst.Buffer ")
        return

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        arr = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        rgba = np.array(arr, copy=True, order="C")
        rgb = cv2.cvtColor(rgba, cv2.COLOR_RGBA2RGB)

        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            if obj_meta.object_id != UNTRACKED_OBJECT_ID:
                user_meta, feature = None, None
                l_user_meta = obj_meta.obj_user_meta_list
                while l_user_meta is not None:
                    try:
                        user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
                    except StopIteration:
                        break

                    if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META:
                        # custom python binding for NvDsObjReid
                        objReid = pyds.NvDsObjReid.cast(user_meta.user_meta_data)
                        feature = objReid.get_feature().copy()
                        # dump ReID vector embedding

                    try:
                        l_user_meta = l_user_meta.next
                    except StopIteration:
                        break

                if feature is not None:
                    rect_params = obj_meta.rect_params
                    box = np.asarray([rect_params.left, rect_params.top, rect_params.width, rect_params.height])
                    box[2:] += box[:2]

                    x1, y1, x2, y2 = box.astype(int)
                    image = rgb[y1 : y2 + 1, x1 : x2 + 1, :].copy()

                    # save above image
               
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
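
For completeness, here is roughly how I attach that probe when building the pipeline (element names are illustrative; the capsfilter after nvvideoconvert is assumed to force RGBA so that pyds.get_nvds_buf_surface() can map the frames):

caps = Gst.ElementFactory.make("capsfilter", "caps")
caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
# ... create and link the remaining elements as in the pipeline comment above ...
caps_src_pad = caps.get_static_pad("src")
if not caps_src_pad:
    logger.error("Unable to get src pad of capsfilter")
else:
    caps_src_pad.add_probe(Gst.PadProbeType.BUFFER, caps_src_pad_buffer_probe, None)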

Did you add the bindings of pyds.NvDsObjReid by yourself? I used deepstream-app for testing.

Modify the source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt as below.

--- a/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
+++ b/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
@@ -14,6 +14,8 @@
 enable-perf-measurement=1
 perf-measurement-interval-sec=5
 #gie-kitti-output-dir=streamscl
+reid-track-output-dir=reid-new
+
 
 [tiled-display]
 enable=1
@@ -45,7 +47,7 @@ cudadec-memtype=0
 [sink0]
 enable=1
 #Type - 1=FakeSink 2=EglSink/nv3dsink (Jetson only) 3=File
-type=2
+type=1
 sync=1
 source-id=0
 gpu-id=0
@@ -154,8 +156,8 @@ ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.s
 # ll-config-file required to set different tracker types
 # ll-config-file=config_tracker_IOU.yml
 # ll-config-file=config_tracker_NvSORT.yml
-ll-config-file=config_tracker_NvDCF_perf.yml
-# ll-config-file=config_tracker_NvDCF_accuracy.yml
+# ll-config-file=config_tracker_NvDCF_perf.yml
+ll-config-file=config_tracker_NvDCF_accuracy.yml
 # ll-config-file=config_tracker_NvDeepSORT.yml
 gpu-id=0
 display-tracking-id=1

Then run the following command line to dump the Re-ID output.

USE_NEW_NVSTREAMMUX=yes ./deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 

Except for the difference in object IDs, the Re-ID vector embeddings are similar.

You can also refer to this link:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvtracker.html#re-id-feature-output

Yes, I added the code below to "bindtrackermeta.cpp".

        py::class_<NvDsReidTensorBatch>(m, "NvDsReidTensorBatch",
                                          pydsdoc::trackerdoc::NvDsReidTensorBatchDoc::descr)
                .def(py::init<>())
                .def_readwrite("featureSize",
                               &NvDsReidTensorBatch::featureSize)
                .def_readwrite("numFilled", &NvDsReidTensorBatch::numFilled)
                .def_readonly("ptr_host", &NvDsReidTensorBatch::ptr_host)
                .def_readonly("ptr_dev", &NvDsReidTensorBatch::ptr_dev)
                .def_readonly("priv_data", &NvDsReidTensorBatch::priv_data)

                .def("get_features", [](NvDsReidTensorBatch &self) -> py::array {
                        auto dtype = py::dtype(py::format_descriptor<float>::format());
                        return py::array(dtype,
                                         {self.numFilled, self.featureSize},
                                         {sizeof(float) * self.featureSize, sizeof(float)},
                                         self.ptr_host);

                    },
                     py::return_value_policy::reference,
                     pydsdoc::trackerdoc::NvDsReidTensorBatchDoc::get_features)

                .def("cast",
                     [](void *data) {
                         return (NvDsReidTensorBatch *) data;
                     },
                     py::return_value_policy::reference,
                     pydsdoc::trackerdoc::NvDsReidTensorBatchDoc::cast);

I think the Re-ID tensor is fine, but the problem is the Re-ID index.

In “deepstream_app.c”, reidInd can be obtained by dereferencing user_meta_data as follows.

gint reidInd = *((int32_t *) (user_meta->user_meta_data));

In Python, I got reidInd like this:

reidInd = ctypes.cast(pyds.get_ptr(user_meta.user_meta_data), ctypes.POINTER(ctypes.c_int32)).contents.value

When I set USE_NEW_NVSTREAMMUX=yes, the Re-ID index is not unique within the batch meta (a sketch of the check I used appears after the dump below).

# [(Object ID, ReID index), ...] NvDsReidTensorBatch.numFilled
[(13, 7), (12, 6), (9, 4), (11, 5), (5, 2), (7, 3), (2, 1), (0, 0), (23, 7), (22, 6), (20, 4), (19, 3), (21, 5), (18, 2), (17, 1), (16, 0)] 8
[(24, 0)] 1
[(28, 3), (27, 2), (26, 1), (25, 0)] 4
[(30, 0)] 1
[(31, 0), (32, 0)] 1
[(24, 0), (30, 0)] 1
[(33, 0)] 1
[(34, 0)] 1
[(24, 0), (0, 0)] 1
[(30, 0), (28, 1), (27, 0)] 3
[(29, 0)] 1
[(35, 0)] 1
[(15, 9), (14, 8), (13, 7), (9, 4), (5, 2), (11, 5), (12, 6), (7, 3), (2, 1), (0, 0), (8, 14), (10, 15), (6, 13), (4, 12), (1, 10), (3, 11)] 16
[(33, 0), (36, 0), (16, 1), (24, 1)] 2
[(35, 0), (36, 0), (37, 1)] 2
[(8, 4), (10, 5), (6, 3), (4, 2), (1, 0), (3, 1), (23, 7), (22, 6), (17, 1), (19, 3), (20, 4), (18, 2), (16, 0), (21, 5)] 8
[(27, 0), (28, 1), (39, 2), (5, 2), (2, 1), (0, 0)] 3
[(30, 0), (29, 0)] 1
[(37, 0)] 1
[(35, 0)] 1
[(32, 0), (39, 0)] 1
[(19, 3), (23, 7), (22, 6), (17, 1), (20, 4), (21, 5), (18, 2), (16, 0), (15, 9), (14, 8), (13, 7), (9, 4), (12, 6), (11, 5), (5, 2), (7, 3), (2, 1), (0, 0)] 10
[(1, 0), (33, 0)] 1
[(25, 0), (30, 0)] 1
[(8, 4), (10, 5), (6, 3), (4, 2), (3, 1), (1, 0)] 6
[(0, 0), (24, 0)] 1
[(37, 1), (29, 0)] 2
[(36, 0), (39, 0)] 1
[(8, 4), (10, 5), (6, 3), (4, 2), (1, 0), (3, 1), (22, 6), (23, 7), (17, 1), (19, 3), (20, 4), (21, 5), (18, 2), (16, 0)] 8
[(35, 0), (36, 0)] 1
[(1, 0), (33, 0)] 1
[(5, 2), (0, 0), (2, 1), (28, 2), (27, 1), (25, 0)] 3
[(37, 0)] 1
[(35, 0)] 1
[(36, 0), (39, 6), (19, 10), (22, 13), (23, 14), (20, 11), (17, 8), (21, 12), (18, 9), (16, 7), (8, 4), (10, 5), (6, 3), (4, 2), (3, 1), (1, 0)] 15
[(15, 8), (14, 7), (12, 6), (9, 4), (11, 5), (7, 3), (5, 2), (0, 0), (2, 1)] 9
[(43, 0), (44, 0)] 1
[(47, 0)] 1
[(35, 0)] 1
[(36, 0)] 1
[(10, 4), (6, 2), (8, 3), (4, 1), (3, 0), (22, 5), (19, 2), (23, 6), (21, 4), (20, 3), (18, 1), (17, 0)] 7
[(15, 9), (12, 6), (14, 8), (9, 4), (13, 7), (11, 5), (5, 2), (7, 3), (0, 0), (2, 1), (48, 0)] 10
[(29, 0), (45, 1)] 2
[(37, 0)] 1
[(35, 0)] 1
[(36, 0), (19, 2), (23, 6), (22, 5), (20, 3), (18, 1), (21, 4), (17, 0)] 7
[(8, 3), (10, 4), (6, 2), (4, 1), (3, 0)] 5
[(15, 9), (14, 8), (12, 6), (7, 3), (9, 4), (11, 5), (5, 2), (2, 1), (13, 7), (0, 0), (33, 0)] 10
[(30, 0), (48, 0)] 1
[(29, 0), (45, 1)] 2
[(0, 0), (2, 1), (24, 1), (48, 0)] 2
[(50, 2), (37, 3), (41, 4), (29, 0), (45, 1)] 5
[(36, 0)] 1
[(8, 4), (10, 5), (6, 3), (4, 2), (3, 1), (1, 0)] 6
[(17, 0), (24, 0)] 1
[(23, 6), (19, 2), (18, 1), (22, 5), (20, 3), (21, 4), (17, 0)] 7
[(15, 9), (14, 8), (12, 6), (0, 0), (7, 3), (5, 2), (2, 1), (11, 5), (9, 4), (13, 7), (10, 4), (8, 3), (6, 2), (4, 1), (3, 0), (51, 5), (52, 10), (33, 11)] 12
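
For reference, the duplicate check behind the dump above is roughly the following (a minimal sketch; pairs is the [(Object ID, ReID index), ...] list collected for one batch, and num_filled is NvDsReidTensorBatch.numFilled for that batch):

def check_reid_indices(pairs, num_filled):
    indices = [ind for _, ind in pairs]
    has_duplicates = len(indices) != len(set(indices))
    out_of_range = any(ind < 0 or ind >= num_filled for ind in indices)
    return has_duplicates, out_of_range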

1. This is completely different from the issue you mentioned above.

These are two different levels of Re-ID vectors.
For NvDsReidTensorBatch:

NvDsBatchMeta
    -> batch_user_meta_list (find type == NVDS_TRACKER_BATCH_REID_META)

For NVDS_TRACKER_OBJ_REID_META:

NvDsBatchMeta
   --> NvDsFrameMeta
     --> NvDsObjectMeta
          -> obj_user_meta_list (find type == NVDS_TRACKER_OBJ_REID_META)

This is not reidInd; it is a meaningless value.

Sorry for the confusion.

I tested both DS 6.3 and DS 7.0, and the situation is quite similar.
In DS 6.3, according to deepstream_app.c, I can get ReID embedding for each object as follows.

static void
write_reid_track_output (AppCtx * appCtx, NvDsBatchMeta * batch_meta)
{
  if (!appCtx->config.reid_track_dir_path)
    return;

  gchar reid_file[1024] = { 0 };
  FILE *reid_params_dump_file = NULL;
  /** Find batch reid tensor in batch user meta. */
  NvDsReidTensorBatch *pReidTensor = NULL;
  for (NvDsUserMetaList *l_batch_user = batch_meta->batch_user_meta_list; l_batch_user != NULL;
      l_batch_user = l_batch_user->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l_batch_user->data;
    if (user_meta && user_meta->base_meta.meta_type == NVDS_TRACKER_BATCH_REID_META) {
      pReidTensor = (NvDsReidTensorBatch *) (user_meta->user_meta_data);
    }
  }

  /** Save the reid embedding for each frame. */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /** Create dump file name. */
    guint stream_id = frame_meta->pad_index;
    g_snprintf (reid_file, sizeof (reid_file) - 1,
        "%s/%02u_%03u_%06lu.txt", appCtx->config.reid_track_dir_path,
        appCtx->index, stream_id, (gulong) frame_meta->frame_num);
    reid_params_dump_file = fopen (reid_file, "w");
    if (!reid_params_dump_file)
      continue;

    if (!pReidTensor)
      continue;

    /** Save the reid embedding for each object. */
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      guint64 id = obj->object_id;

      for (NvDsUserMetaList * l_obj_user = obj->obj_user_meta_list; l_obj_user != NULL;
          l_obj_user = l_obj_user->next) {

        /** Find the object's reid embedding index in user meta. */
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_obj_user->data;
        if (user_meta && user_meta->base_meta.meta_type == NVDS_TRACKER_OBJ_REID_META
            && user_meta->user_meta_data) {

          gint reidInd = *((int32_t *) (user_meta->user_meta_data));
          if (reidInd >= 0 && reidInd < (gint)pReidTensor->numFilled) {
            fprintf (reid_params_dump_file, "%lu", id);
            for (guint ele_i = 0; ele_i < pReidTensor->featureSize; ele_i++) {
              fprintf (reid_params_dump_file, " %f",
                pReidTensor->ptr_host[reidInd * pReidTensor->featureSize + ele_i]);
            }
            fprintf (reid_params_dump_file, "\n");
          }
        }
      }
    }
    fclose (reid_params_dump_file);
  }
}

So, I implemented the above code using Python as follows.
First, I got the NvDsReidTensorBatch from the batch meta:

def caps_src_pad_buffer_probe(unused_pad: Gst.Pad, info: Gst.PadProbeInfo, user_data: Any) -> Gst.PadProbeReturn:
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        logger.error("Unable to get Gst.Buffer ")
        return

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    reidTensor = None
    reidFeatures: np.ndarray = None

    l_user_meta = batch_meta.batch_user_meta_list
    while l_user_meta is not None:
        try:
            user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
        except StopIteration:
            break

        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_BATCH_REID_META:
            reidTensor = pyds.NvDsReidTensorBatch.cast(user_meta.user_meta_data)
            reidFeatures = reidTensor.get_features()

        try:
            l_user_meta = l_user_meta.next
        except StopIteration:
            break

Then, I got a ReID feature vector as follows.

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            l_user_meta = obj_meta.obj_user_meta_list
            while l_user_meta is not None:
                try:
                    user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
                except StopIteration:
                    break

                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META and user_meta.user_meta_data:
                    reidInd = ctypes.cast(
                        pyds.get_ptr(user_meta.user_meta_data), ctypes.POINTER(ctypes.c_int32)
                    ).contents.value
                    if reidTensor is not None and 0 <= reidInd < reidTensor.numFilled:
                        feature = reidFeatures[reidInd, :]

                try:
                    l_user_meta = l_user_meta.next
                except StopIteration:
                    break

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

I checked the reidInd values for each batch and found that they were duplicated when USE_NEW_NVSTREAMMUX=yes.

Is there something I misunderstood?

I think I know the cause of this problem. Let's set the Python bindings aside first and make the C code work correctly.

This code worked in DS-6.3, but unfortunately its behavior changed in DS-7.0.
That is why I can't reproduce it. See the write_reid_track_output implementation in DS-7.0.

Please use DS-7.0; you can read NvDsObjReid directly from NVDS_TRACKER_OBJ_REID_META without parsing NvDsReidTensorBatch. This way you don't have to get reidInd.

In addition, if there are bugs, we may not fix them in DS-6.3.

As I posted at first, DS 7.0 also produces weird vector embeddings.

I also made the Python binding for DS 7.0 as you mentioned, and I tried to get the Re-ID vector embeddings in DS 7.0 as follows.

if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META:
    objReid = pyds.NvDsObjReid.cast(user_meta.user_meta_data)

    if objReid and objReid.ptr_host != 0 and objReid.featureSize > 0:
        feature = objReid.get_feature().copy()

But if I set USE_NEW_NVSTREAMMUX=yes, I still can’t get the correct ReID vector embedding.

Maybe I should compare the output vectors of the C code for the same input videos and check the distribution of each object_id.
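
What I have in mind is roughly the following (a minimal sketch; it assumes the per-frame text files written by write_reid_track_output into reid-track-output-dir, with one "<object_id> <f0> <f1> ..." line per object):

import glob
import numpy as np

def load_reid_dump(dump_dir: str) -> dict:
    # Collect every dumped embedding, grouped by object_id.
    per_id = {}
    for path in sorted(glob.glob(f"{dump_dir}/*.txt")):
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) < 2:
                    continue
                obj_id = int(parts[0])
                per_id.setdefault(obj_id, []).append(np.asarray(parts[1:], dtype=np.float32))
    return per_id

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))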

First, make sure that the C code is working properly. If there is a problem with the corresponding Python code, it may be that the Python bindings have encountered issues.

I tested C code and it was working properly.

I looked at the NvDsUserMeta.user_meta_data value as follows.

def some_src_pad_buffer_probe(unused_pad: Gst.Pad, info: Gst.PadProbeInfo, user_data: Any) -> Gst.PadProbeReturn:
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    # get reidTensor of NvDsReidTensorBatch type

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            l_user_meta = obj_meta.obj_user_meta_list
            while l_user_meta is not None:
                try:
                    user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
                except StopIteration:
                    break

                if (
                    user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META
                    and user_meta.user_meta_data
                ):
                    reidInd = ctypes.cast(
                        pyds.get_ptr(user_meta.user_meta_data), ctypes.POINTER(ctypes.c_int32)
                    ).contents.value
                    if reidInd >= 0 and reidInd < reidTensor.numFilled:
                        print(f"stream: {frame_meta.pad_index} frame: {frame_meta.frame_num}, {user_meta.base_meta}.{user_meta.user_meta_data} {reidInd}", end=None)
    ...
    # omitted for simplicity
    ...

When I unset USE_NEW_NVSTREAMMUX, the Python binding also works perfectly, as I expected.

Every Re-ID tensor index has a different value within a batch meta.

Then I set USE_NEW_NVSTREAMMUX=yes, and the results are as follows:

Within batch meta - 1 tensor(s) on NvDsReidTensorBatch
stream: 5 frame: 174, <pyds.NvDsBaseMeta object at 0x7f9ddc0c23b0>.<capsule object NULL at 0x7fa0c0ba1d50> 0
stream: 0 frame: 172, <pyds.NvDsBaseMeta object at 0x7f9ddc0c20b0>.<capsule object NULL at 0x7fa0c0bc0570> 0

Within batch meta - 9 tensor(s) on NvDsReidTensorBatch
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0ba1d50> 8
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c21b0>.<capsule object NULL at 0x7fa0c0ba1d50> 7
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0ba1d50> 5
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c21b0>.<capsule object NULL at 0x7fa0c0ba1d50> 3
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0ba1d50> 0
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c21f0>.<capsule object NULL at 0x7fa0c0ba1d50> 6
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0ba1d50> 4
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c21f0>.<capsule object NULL at 0x7fa0c0ba1d50> 2
stream: 2 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0ba1d50> 1
stream: 3 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0bc0510> 4
stream: 3 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c25b0>.<capsule object NULL at 0x7fa0c0bc0510> 3
stream: 3 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0bc0510> 2
stream: 3 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c25b0>.<capsule object NULL at 0x7fa0c0bc0510> 1
stream: 3 frame: 189, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2330>.<capsule object NULL at 0x7fa0c0bc0510> 0

Within batch meta - 3 tensor(s) on NvDsReidTensorBatch
stream: 2 frame: 192, <pyds.NvDsBaseMeta object at 0x7f9ddc0c26f0>.<capsule object NULL at 0x7fa0c0ba1d50> 0
stream: 5 frame: 190, <pyds.NvDsBaseMeta object at 0x7f9ddc0c21b0>.<capsule object NULL at 0x7fa0c0ba1d50> 2
stream: 5 frame: 190, <pyds.NvDsBaseMeta object at 0x7f9ddc0c2370>.<capsule object NULL at 0x7fa0c0bc0570> 0
stream: 5 frame: 190, <pyds.NvDsBaseMeta object at 0x7f9ddc0c21b0>.<capsule object NULL at 0x7fa0c0bc0570> 1

Within batch meta - 1 tensor(s) on NvDsReidTensorBatch
stream: 0 frame: 192, <pyds.NvDsBaseMeta object at 0x7f9ddc0c28f0>.<capsule object NULL at 0x7fa0c0bc0510> 0
stream: 2 frame: 198, <pyds.NvDsBaseMeta object at 0x7f9ddc0c26f0>.<capsule object NULL at 0x7fa0c0ba1d50> 0

I have no idea about the cause of this problem.

Did you test using DS-6.3? I modified deepstream_test_3.py and it works normally.

cd /opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test3

cp ../deepstream-test2/dstest2_tracker_config.txt .

cp /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml .

Modify ll-config-file in dstest2_tracker_config.txt to point to config_tracker_NvDCF_accuracy.yml.

Run:

USE_NEW_NVSTREAMMUX=yes python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264  file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 

Here is the patch:

diff --git a/apps/deepstream-test3/deepstream_test_3.py b/apps/deepstream-test3/deepstream_test_3.py
index d81ec92..e1400f9 100755
--- a/apps/deepstream-test3/deepstream_test_3.py
+++ b/apps/deepstream-test3/deepstream_test_3.py
@@ -34,6 +34,7 @@ from common.is_aarch_64 import is_aarch64
 from common.bus_call import bus_call
 from common.FPS import PERF_DATA
 
+import ctypes
 import pyds
 
 no_display = False
@@ -56,6 +57,71 @@ OSD_PROCESS_MODE= 0
 OSD_DISPLAY_TEXT= 1
 pgie_classes_str= ["Vehicle", "TwoWheeler", "Person","RoadSign"]
 
+def tiler_sink_pad_buffer_probe(pad,info,u_data):
+    gst_buffer = info.get_buffer()
+    if not gst_buffer:
+        print("Unable to get GstBuffer ")
+        return
+
+    # Retrieve batch metadata from the gst_buffer
+    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
+    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
+    print(f"---------reidInd--------- ")
+    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
+    l_frame = batch_meta.frame_meta_list
+    while l_frame is not None:
+        try:
+            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
+            # The casting is done by pyds.NvDsFrameMeta.cast()
+            # The casting also keeps ownership of the underlying memory
+            # in the C code, so the Python garbage collector will leave
+            # it alone.
+            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
+        except StopIteration:
+            break
+
+        l_obj=frame_meta.obj_meta_list
+        while l_obj is not None:
+            try:
+                # Casting l_obj.data to pyds.NvDsObjectMeta
+                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
+            except StopIteration:
+                break
+
+            l_user_meta = obj_meta.obj_user_meta_list
+            while l_user_meta is not None:
+                try:
+                    user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
+                except StopIteration:
+                    break
+
+                if (
+                    user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META
+                    and user_meta.user_meta_data
+                ):
+                    # user_meta_data_p = ctypes.c_void_p()
+                    reidInd = ctypes.cast(pyds.get_ptr(user_meta.user_meta_data), ctypes.POINTER(ctypes.c_int32)).contents.value
+                    # if reidInd >= 0 and reidInd < reidTensor.numFilled:
+                    print(f"reidInd stream: {frame_meta.pad_index} frame: {frame_meta.frame_num}, {user_meta.base_meta}.{user_meta.user_meta_data} {reidInd}", end=None)
+                    # print(f"reidInd {reidInd}")
+                try:
+                    l_user_meta=l_user_meta.next
+                except StopIteration:
+                    break
+
+            try: 
+                l_obj=l_obj.next
+            except StopIteration:
+                break
+
+        try:
+            
+            l_frame=l_frame.next
+        except StopIteration:
+            break
+    print(f"-------reidInd-end----------")
+    return Gst.PadProbeReturn.OK
+
 # pgie_src_pad_buffer_probe  will extract metadata received on tiler sink pad
 # and update params for drawing rectangle, object information etc.
 def pgie_src_pad_buffer_probe(pad,info,u_data):
@@ -265,6 +331,10 @@ def main(args, requested_pgie=None, config=None, disable_probe=False):
     if not pgie:
         sys.stderr.write(" Unable to create pgie :  %s\n" % requested_pgie)
 
+    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
+    if not tracker:
+        sys.stderr.write(" Unable to create tracker \n")
+
     if disable_probe:
         # Use nvdslogger for perf measurement instead of probe function
         print ("Creating nvdslogger \n")
@@ -306,7 +376,7 @@ def main(args, requested_pgie=None, config=None, disable_probe=False):
                 sys.stderr.write(" Unable to create nv3dsink \n")
         else:
             print("Creating EGLSink \n")
-            sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+            sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
             if not sink:
                 sys.stderr.write(" Unable to create egl sink \n")
 
@@ -317,10 +387,10 @@ def main(args, requested_pgie=None, config=None, disable_probe=False):
         print("At least one of the sources is live")
         streammux.set_property('live-source', 1)
 
-    streammux.set_property('width', 1920)
-    streammux.set_property('height', 1080)
+    # streammux.set_property('width', 1920)
+    # streammux.set_property('height', 1080)
     streammux.set_property('batch-size', number_sources)
-    streammux.set_property('batched-push-timeout', 4000000)
+    # streammux.set_property('batched-push-timeout', 4000000)
     if requested_pgie == "nvinferserver" and config != None:
         pgie.set_property('config-file-path', config)
     elif requested_pgie == "nvinferserver-grpc" and config != None:
@@ -341,8 +411,31 @@ def main(args, requested_pgie=None, config=None, disable_probe=False):
     tiler.set_property("height", TILED_OUTPUT_HEIGHT)
     sink.set_property("qos",0)
 
+    #Set properties of tracker
+    config = configparser.ConfigParser()
+    config.read('dstest2_tracker_config.txt')
+    config.sections()
+
+    for key in config['tracker']:
+        if key == 'tracker-width' :
+            tracker_width = config.getint('tracker', key)
+            tracker.set_property('tracker-width', tracker_width)
+        if key == 'tracker-height' :
+            tracker_height = config.getint('tracker', key)
+            tracker.set_property('tracker-height', tracker_height)
+        if key == 'gpu-id' :
+            tracker_gpu_id = config.getint('tracker', key)
+            tracker.set_property('gpu_id', tracker_gpu_id)
+        if key == 'll-lib-file' :
+            tracker_ll_lib_file = config.get('tracker', key)
+            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
+        if key == 'll-config-file' :
+            tracker_ll_config_file = config.get('tracker', key)
+            tracker.set_property('ll-config-file', tracker_ll_config_file)
+
     print("Adding elements to Pipeline \n")
     pipeline.add(pgie)
+    pipeline.add(tracker)
     if nvdslogger:
         pipeline.add(nvdslogger)
     pipeline.add(tiler)
@@ -353,7 +446,9 @@ def main(args, requested_pgie=None, config=None, disable_probe=False):
     print("Linking elements in the Pipeline \n")
     streammux.link(queue1)
     queue1.link(pgie)
-    pgie.link(queue2)
+    #pgie.link(queue2)
+    pgie.link(tracker)
+    tracker.link(queue2)
     if nvdslogger:
         queue2.link(nvdslogger)
         nvdslogger.link(tiler)
@@ -380,6 +475,11 @@ def main(args, requested_pgie=None, config=None, disable_probe=False):
             # perf callback function to print fps every 5 sec
             GLib.timeout_add(5000, perf_data.perf_print_callback)
 
+    tilersinkpad = tiler.get_static_pad("sink")
+    if not tilersinkpad:
+        sys.stderr.write(" Unable to get sink pad of tiler \n")
+    tilersinkpad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)
+
     # List the sources
     print("Now playing...")
     for i, source in enumerate(args):


Here is the result:

---------reidInd--------- 
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efef0>.<capsule object NULL at 0x7fd0c48dc090> 0
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efef0>.<capsule object NULL at 0x7fd0c48dc090> 1
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efef0>.<capsule object NULL at 0x7fd0c48dc090> 3
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efef0>.<capsule object NULL at 0x7fd0c48dc090> 2
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efef0>.<capsule object NULL at 0x7fd0c48dc090> 5
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efef0>.<capsule object NULL at 0x7fd0c48dc090> 6
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efef0>.<capsule object NULL at 0x7fd0c48dc090> 4
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48f8170>.<capsule object NULL at 0x7fd0c48dc090> 7
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48f8170>.<capsule object NULL at 0x7fd0c48dc090> 9
reidInd stream: 0 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48f8170>.<capsule object NULL at 0x7fd0c48dc090> 8
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efbf0>.<capsule object NULL at 0x7fd0c48dc090> 10
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efbf0>.<capsule object NULL at 0x7fd0c48dc090> 11
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efbf0>.<capsule object NULL at 0x7fd0c48dc090> 13
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efbf0>.<capsule object NULL at 0x7fd0c48dc090> 12
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efbf0>.<capsule object NULL at 0x7fd0c48dc090> 15
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efbf0>.<capsule object NULL at 0x7fd0c48dc090> 16
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48f80f0>.<capsule object NULL at 0x7fd0c48dc090> 14
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48f80f0>.<capsule object NULL at 0x7fd0c48dc090> 17
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48f80f0>.<capsule object NULL at 0x7fd0c48dc090> 19
reidInd stream: 1 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48f80f0>.<capsule object NULL at 0x7fd0c48dc090> 18
reidInd stream: 2 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efd70>.<capsule object NULL at 0x7fd0c48dc090> 20
reidInd stream: 2 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efd70>.<capsule object NULL at 0x7fd0c48dc090> 21
reidInd stream: 2 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efd70>.<capsule object NULL at 0x7fd0c48dc090> 23
reidInd stream: 2 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efd70>.<capsule object NULL at 0x7fd0c48dc090> 22
reidInd stream: 2 frame: 2, <pyds.NvDsBaseMeta object at 0x7fd0c48efd70>.<capsule object NULL at 0x7fd0c48dc090> 25

By uncommenting those streammux properties again, you can run the same test with the legacy nvstreammux. The results are similar.

Finally, I found where the problem was coming from.

When I set config-file-path on the new nvstreammux, it produced duplicate Re-ID index values within a batch.

Here is my config file:

[property]
algorithm-type=1
batch-size=7
#max-fps-control disables (=0) and enables (=1)
#throttling of buffers in muxer to roughly achieve
#configured max-fps setting
max-fps-control=1
overall-max-fps-n=40
overall-max-fps-d=1
overall-min-fps-n=20
overall-min-fps-d=1
max-same-source-frames=2
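
For reference, this is how I pass that file to the new nvstreammux in my Python code (a minimal sketch; the file name is illustrative):

streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
# With USE_NEW_NVSTREAMMUX=yes, the new mux takes its settings (batch-size, fps control, ...)
# from this file instead of the legacy width/height/batched-push-timeout properties.
streammux.set_property("config-file-path", "mux_config.txt")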

Thank you for your kind support and patience.

Currently I have 7 RTSP inputs with different frame rates (20-30 FPS).

How do I maximize performance in this case?

You can open a new topic to discuss performance issues.

Here is a FAQ for performance tuning; you can refer to it.
