Can you share more details of your use case? reid-track-output-dir is for storing the Re-ID features of each frame into text files. You don't need it if you send the Re-ID features to the cloud or use them in a Python application.
Yes, of course.
So I would like to detect and track objects at two different physical locations, and I would like to use the Re-ID features to establish a possible correspondence between the objects at the two locations.
Is it possible to extract these features at one location and transfer them to the other? If so, how can I access them?
Sorry, I do not know what MTMC means in this context. Could you elaborate?
Yes, I have some objects that have been seen previously by a different camera, and I would like to pair these detections together by their Re-ID ‘fingerprints’.
Thank you for the answer @kesong.
I have asked for access to MTMC, and finally received it today. Looking through the code, the first thing I noticed is that it is in C++. Are there any Python examples around that show how I can access the Re-ID feature tensor?
I have noticed that the code finds the batch ReID tensor in the batch user meta, via the NVDS_TRACKER_BATCH_REID_META meta type. Searching for it, I found this conversation. I guess my question is really similar: is there any official Python functionality to retrieve the ReID tensors? (Almost a year has passed since that post, so I am hopeful.)
In MTMC, the ReID vector is sent to the Multi-camera fusion microservice. The Multi-camera fusion microservice identifies the same person across cameras. Is it possible to use MTMC for your project? Can you share more details of your use case?
I understand that in MTMC the ReID vector is sent to the microservices and fused in the cloud.
The issue is that we have (and would like to have in the future as well) a DeepStream Python pipeline and application on the camera, so even if we were to use MTMC, the question of how to get and share the ReID vector in Python would persist.
I would rather not share more details about the use case, but the basic analogy is that we have two cameras that see (usually) the same objects with a time difference, and we need to pair the tracks of the same objects together.
Your pipeline may look like this:
cam01, cam02, … → streammux → pgie (person detection) → nvtracker (person tracking) → nvconv → fakesink
You can have a probe/cb function after nvtracker to
get person tracking info (tracking Id, bbox coordinates)
get the cropped person image (a capsfilter should be placed after nvtracker to convert I420 to RGBA)
have a selection method to pick good person crops
…
So you should have a TrackingManager class that manages those things, where one tracking Id has a sequence of cropped person images. Then you bring in a person embedding model to vectorize the cropped person images (all in one batch). Next, you save the tracking Id info (including the person embedding vectors) to a database (Redis, MongoDB).
Finally, you should have another Python process that loads the tracking Id info from the database and performs person re-identification in a first-come, first-served manner.
The above pipeline is an offline ReID; you can tailor it to a near real-time fashion.
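To make the matching side concrete, here is a minimal sketch under the assumptions above: embeddings are stored in Redis as float32 blobs keyed by tracking Id, and pairing is nearest-neighbour by cosine similarity. The key layout ("track:*"), the 0.7 threshold, and the Redis client usage are illustrative assumptions, not part of DeepStream:

    import numpy as np
    import redis

    # Illustrative assumptions: each Redis value under "track:<id>" is a float32
    # embedding serialized with ndarray.tobytes(); 0.7 is an arbitrary threshold.
    r = redis.Redis()

    def load_embeddings():
        ids, vecs = [], []
        for key in r.scan_iter("track:*"):
            ids.append(key.decode())
            vecs.append(np.frombuffer(r.get(key), dtype=np.float32))
        return ids, np.stack(vecs) if vecs else np.empty((0, 0), dtype=np.float32)

    def match(query, ids, vecs, threshold=0.7):
        # Cosine similarity between the query embedding and every stored vector
        if len(ids) == 0:
            return None, 0.0
        sims = vecs @ query / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(query) + 1e-8)
        best = int(np.argmax(sims))
        return (ids[best], float(sims[best])) if sims[best] >= threshold else (None, 0.0)

A first-come, first-served matcher would simply call match() on each new track's embedding as it arrives and bind it to the best existing Id, falling back to registering a new entry when no score clears the threshold.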
I have found an old post from @st123439 that I mentioned before (this), which gives a direction for writing the bindings.
I have written them using the new repo from NVIDIA as a base, adding the edits found in the old post, but for some reason it did not work, and I get:
AttributeError: module 'pyds' has no attribute 'NVDS_TRACKER_BATCH_REID_META'.
In the end, I have successfully implemented the Python bindings, based on the bindings code that I referenced earlier.
In case someone stumbles onto the same issue, let me share the bindings that work for me and spare you some time. (I did not open a PR on GitHub, because it seems like they do not accept contributions.)
Download these files and remove the .txt ending where logical
Copy the .cpp files to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/src/...
Copy the .h files to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/docstrings/...
Copy CMakeLists.txt to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/CMakeLists.txt
Copy setup.py to /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/packaging/setup.py
Follow the official bindings guide for compiling them, except for the wheel installation, which should be: pip3 install ./pyds-1.1.110-py3-none*.whl
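Before wiring it into a pipeline, a quick sanity check that the rebuilt wheel actually exposes the new symbols (the attribute names are the ones used in the bindings shared above):

    import pyds

    # These should resolve without AttributeError after installing the custom wheel
    print(pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META)
    print(pyds.NvDsObjReid)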
Usage: in a probe use it like this:
import pyds

# Inside a pad probe: gst_buffer = info.get_buffer()
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    try:
        frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
    except StopIteration:
        break
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        try:
            obj_meta = pyds.glist_get_nvds_object_meta(l_obj.data)
        except StopIteration:
            break
        l_user_meta = obj_meta.obj_user_meta_list
        while l_user_meta is not None:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
            except StopIteration:
                break
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_TRACKER_OBJ_REID_META:
                pReidObj = pyds.NvDsObjReid.cast(user_meta.user_meta_data)
                if pReidObj.featureSize > 0:
                    # logger.info(f"pReidObj.ptr_host: {pReidObj.ptr_host}")  # CPU mem pointer
                    # logger.info(f"pReidObj.ptr_dev: {pReidObj.ptr_dev}")  # GPU mem pointer
                    fingerprint = pReidObj.ptr_host
            l_user_meta = l_user_meta.next
        l_obj = l_obj.next
    l_frame = l_frame.next
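Note that ptr_host is only a raw pointer to the feature buffer; to obtain the actual vector you still need to read featureSize floats from it. A sketch, assuming the bindings expose ptr_host as a plain integer address and that the feature is a contiguous float32 array (which is how the tracker outputs ReID features):

    import ctypes
    import numpy as np

    # Assumption: ptr_host is exposed by the custom bindings as a plain integer
    # address of a contiguous float32 array of length featureSize. Copy the data
    # out so the vector outlives the buffer probe.
    buf_type = ctypes.c_float * pReidObj.featureSize
    fingerprint = np.ctypeslib.as_array(buf_type.from_address(pReidObj.ptr_host)).copy()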
@kesong, will this feature be part of the next DeepStream bindings release?