Output Re-ID features for tracker

• Hardware Platform (Jetson / GPU) : GPU
• DeepStream Version : 7.0
• TensorRT Version : 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only) : 535.171.04
• Issue Type( questions, new requirements, bugs) : questions

Hi,

I would like to ask how to get the Re-ID feature outputs from a Python DeepStream app.

I see here how it works for the C++ version, but I could not figure out where I can add the reid-track-output-dir config in the Python code.

Thank you for your help in advance!

Can you share more details of your use case? reid-track-output-dir is for storing the Re-ID features of each frame into text files. You don't need it if you send the Re-ID features to the cloud or use them in a Python application.

Yes, of course.
So I would like to detect and track objects at two different physical locations, and I would like to use the Re-ID features to establish a possible correspondence between the objects at the two locations.

Is it possible to extract these features from one place and transfer them to another? If so, how can I access them?

Hello @kesong,
Are there any updates about this topic?

Do you mean you want to track objects across multiple cameras? Can you refer to MTMC?

Sorry, I do not know what MTMC means in this context. Could you elaborate?

Yes, I have some objects that have been seen previously by a different camera, and I would like to pair these detections together by their Re-ID ‘fingerprints’.

Here is MTMC (Multi-Target Multi-Camera): NVIDIA Multi-Camera Tracking AI Workflow

Thank you for the answer @kesong.
I have asked for access to MTMC, and finally received it today. Looking through the code, it seems that:

  1. First of all, it is in C++. Are there any Python examples around which show how I can access the Re-ID feature tensor?
  2. I have noticed that the code finds the batch Re-ID tensor in the batch user meta, via the NVDS_TRACKER_BATCH_REID_META meta type. Searching for it, I found this conversation. I guess my question is really similar: Is there any official Python functionality to retrieve the Re-ID tensors? (Almost a year has passed since that post, so I am hopeful.)

Thank you for your help!


In MTMC, the Re-ID vector is sent to the multi-camera fusion microservice, which identifies the same person across cameras. Is it possible to use MTMC for your project? Can you share more details of your use case?

Thank you for the answer.

I understand that in MTMC the ReID vector is sent to the microservices and fused in the cloud.
The issue is that we have (and would like to keep in the future) a DeepStream Python pipeline and application on the camera, so even if we were to use MTMC, the question remains: how do we get and share the Re-ID vector in Python?

I would rather not share more details about the use case, but the basic analogy is that we have two cameras that see (usually) the same objects with a time difference, and we need to pair the tracks of the same objects together.
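For reference, once Re-ID embeddings are available from both cameras, pairing the tracks usually comes down to a nearest-neighbour search on cosine similarity. Here is a minimal sketch; `match_tracks`, the 0.7 threshold, the 256-dim embeddings, and the toy data are all illustrative assumptions, not DeepStream API:

```python
import numpy as np

def match_tracks(feats_a, feats_b, threshold=0.7):
    """Greedily pair tracks from camera A with tracks from camera B
    by cosine similarity of their Re-ID embeddings.

    feats_a, feats_b: dicts mapping track_id -> 1-D embedding (np.ndarray).
    Returns a list of (id_a, id_b, similarity) pairs above the threshold.
    """
    pairs = []
    used_b = set()
    for id_a, fa in feats_a.items():
        fa = fa / np.linalg.norm(fa)
        best_id, best_sim = None, threshold
        for id_b, fb in feats_b.items():
            if id_b in used_b:
                continue
            sim = float(fa @ (fb / np.linalg.norm(fb)))
            if sim > best_sim:
                best_id, best_sim = id_b, sim
        if best_id is not None:
            used_b.add(best_id)
            pairs.append((id_a, best_id, best_sim))
    return pairs

# Toy example: track 1 on camera A and track 9 on camera B are the same object.
rng = np.random.default_rng(0)
v = rng.normal(size=256)
cam_a = {1: v + 0.01 * rng.normal(size=256), 2: rng.normal(size=256)}
cam_b = {9: v + 0.01 * rng.normal(size=256), 8: rng.normal(size=256)}
print(match_tracks(cam_a, cam_b))
```

In high-dimensional space, embeddings of unrelated objects have near-zero cosine similarity, so a fixed threshold like this is a reasonable first cut before moving to a proper assignment method.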

Hi @levay ,

Your pipeline may look like this:
cam01, cam02 ,… → streammux → pgie (person detection) → nvtracker (person tracking) → nvconv → fakesink
You can have a probe/cb function after nvtracker to

  • get person tracking info (tracking Id, bbox coordinates)
  • get the cropped person image (a capsfilter should be placed after nvtracker to convert I420 to RGBA)
  • have a selection method to select good person cropped image

So you should have a TrackingManager class that manages those things, where each tracking ID has a sequence of cropped person images. Then you bring in a person-embedding model to vectorize the cropped images (all in one batch) into vectors. Next, you save the tracking ID info (including the person embedding vectors) to a database (Redis, MongoDB).

Finally, you should have another Python process that loads the tracking ID info from the database and does person re-identification in a first-come, first-served manner.

The above pipeline is an offline Re-ID; you can tailor it to a near-real-time fashion.
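As a rough illustration of the last two steps, here is a minimal in-process sketch of the suggested flow. A plain dict stands in for the Redis/MongoDB layer, and the `TrackStore`/`reidentify` names and the 0.7 threshold are made up for this example:

```python
import numpy as np

class TrackStore:
    """Stands in for the database layer: track_id -> embedding."""
    def __init__(self):
        self._db = {}

    def save(self, track_id, embedding):
        self._db[track_id] = np.asarray(embedding, dtype=np.float32)

    def items(self):
        return self._db.items()

def reidentify(store, query, threshold=0.7):
    """First-come, first-served: return the first stored track whose
    cosine similarity to `query` clears the threshold, else None."""
    q = query / np.linalg.norm(query)
    for track_id, emb in store.items():
        sim = float(q @ (emb / np.linalg.norm(emb)))
        if sim >= threshold:
            return track_id, sim
    return None

store = TrackStore()
rng = np.random.default_rng(1)
known = rng.normal(size=128)
store.save("cam01_track_5", known)
store.save("cam01_track_6", rng.normal(size=128))

# A new observation from the second camera, close to cam01_track_5:
result = reidentify(store, known + 0.01 * rng.normal(size=128))
print(result)
```

In a real deployment the two halves (the probe writing embeddings, and this matcher reading them) would run as separate processes connected through the database, exactly as described above.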

Good luck.


You need to add the bindings if you want to get the Re-ID vector in Python.


Thank you!
As it seems that I need to build the bindings for this, we might start going in this direction.

I see, thank you. Is there any way to get back to you with the code for the bindings, so that in future versions it will be built in already?

I have found an old post from @st123439 that I mentioned before (this) that gives a direction for writing the bindings.
I have written them using the new repo from NVIDIA as a base, adding the edits found in the old post, but for some reason I did not get any activations, and I get

AttributeError: module ‘pyds’ has no attribute ‘NVDS_TRACKER_BATCH_REID_META’.

These are the files I am using (delete the txt extension):
bindnvdsmeta.cpp.txt (27.5 KB)
bindtrackermeta.cpp.txt (6.9 KB)
nvdsmetadoc.h.txt (23.7 KB)
trackermetadoc.h.txt (10.7 KB)

Could you @kesong, or maybe @st123439 provide a hint on what could be wrong with this? (I have outputReidTensor: 1 as well)

Thank you!

Hello @kesong,
Have you managed to take a look at these files?
Thank you!

Hi @kesong,
Please let me know if you’ve had time to take a look at this, our development route is blocked by this issue at the moment.

The Re-ID meta structure changed between DeepStream 6.4 and DeepStream 7.0. You can refer to: Wrong and Invalid ID recieved for object while extracting feature embedding using the REID module - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums. So you also need some changes to the binding patches.

Thank you for that link @kesong!

In the end I have successfully implemented the python bindings, based on the bindings code that I have referenced earlier.

In case someone stumbles onto the same issue, let me share the bindings that work for me, and spare you some time. (I did not open a PR on GitHub, because it seems like they do not accept contributions.)

bindnvdsmeta.cpp.txt (27.5 KB)
bindtrackermeta.cpp.txt (8.0 KB)
CMakeLists.txt (4.3 KB)
nvdsmetadoc.h.txt (23.7 KB)
setup.py.txt (822 Bytes)
trackermetadoc.h.txt (11.3 KB)

Steps to make it work:

  1. Download these files, remove .txt ending where logical
  2. Copy the .cpp files to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/src/...
  3. Copy the .h files to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/docstrings/...
  4. Copy CMakeLists.txt to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/CMakeLists.txt
  5. Copy setup.py to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/packaging/setup.py
  6. Follow the official bindings guide for compiling them, except for the wheel installation, which should be: pip3 install ./pyds-1.1.110-py3-none*.whl

Usage: in a probe use it like this:

batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    try:
        frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
    except StopIteration:
        break

    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        try:
            obj_meta = pyds.glist_get_nvds_object_meta(l_obj.data)
        except StopIteration:
            break

        l_user_meta = obj_meta.obj_user_meta_list
        while l_user_meta is not None:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
            except StopIteration:
                break

            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META:
                pReidObj = pyds.NvDsObjReid.cast(user_meta.user_meta_data)
                if pReidObj.featureSize > 0:
                    # pReidObj.ptr_host is the CPU memory pointer,
                    # pReidObj.ptr_dev is the GPU memory pointer.
                    fingerprint = pReidObj.ptr_host

            try:
                l_user_meta = l_user_meta.next
            except StopIteration:
                break

        try:
            l_obj = l_obj.next
        except StopIteration:
            break

    try:
        l_frame = l_frame.next
    except StopIteration:
        break
@kesong, will this feature be a part of the next DeepStream bindings release?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.