How to access Re-ID tensor for cross camera re-id

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description.)

Hi, I am using the Python version of DeepStream 6.3.

My pipeline is like below:

source (0,1) → streammux → pgie → tracker → tiler → osd → sink

I am using a custom Re-ID model with the NvDeepSORT tracker.

As per the documentation, I set outputReidTensor: 1.

Sources 0 and 1 are two camera streams at different viewpoints with an overlapping field of view of road traffic. My PGIE is a detector that detects motorcycles and cars.

I am able to track objects in both streams, and each object gets a unique track ID within its stream.

Now I want to do cross-camera re-ID so that I can associate the track of a vehicle in one stream with the track of the same vehicle in the other stream.

My intuition was to compare the Re-ID features of the tracks in the two streams and assign the same ID to the pair of tracks with the closest distance. Is it possible to do so?
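For what it's worth, the nearest-distance idea can be sketched outside the pipeline. This is an illustrative sketch, not DeepStream API: it assumes you have already collected one embedding per track from each stream into dicts, and it greedily pairs tracks by cosine distance under a made-up threshold.

```python
import numpy as np

def match_tracks(feats_a, feats_b, max_cos_dist=0.3):
    """Greedily pair tracks from two streams by cosine distance
    between their Re-ID embeddings.

    feats_a / feats_b: {track_id: embedding vector} for each stream.
    Returns a list of (id_in_a, id_in_b) pairs.
    """
    ids_a, ids_b = list(feats_a), list(feats_b)
    A = np.stack([feats_a[i] for i in ids_a]).astype(float)
    B = np.stack([feats_b[i] for i in ids_b]).astype(float)
    # L2-normalize so the dot product becomes cosine similarity
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    dist = 1.0 - A @ B.T                       # (Na, Nb) cosine distances
    pairs, used_a, used_b = [], set(), set()
    # visit candidate pairs from closest to farthest
    for ia, ib in sorted(np.ndindex(dist.shape), key=lambda p: dist[p]):
        if ia in used_a or ib in used_b or dist[ia, ib] > max_cos_dist:
            continue
        pairs.append((ids_a[ia], ids_b[ib]))
        used_a.add(ia)
        used_b.add(ib)
    return pairs
```

In practice you would average each track's embedding over its reidHistorySize samples and use Hungarian assignment (e.g. scipy.optimize.linear_sum_assignment) instead of the greedy loop.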

Also, I want to keep a buffer of frames in my DeepStream app so that if a certain anomaly occurs in the scene, we can save the track of the object to an output directory. Please advise how this can be achieved.
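One possible shape for such a buffer (a generic sketch, not DeepStream API: the class name, field layout, and JSON dump format are all illustrative) is a fixed-length deque that a probe pushes into and that gets flushed to disk when the anomaly fires:

```python
import json
import os
from collections import deque

class FrameRingBuffer:
    """Keep the last N frames' data so they can be dumped when an anomaly
    is detected. In a DeepStream probe you would push a copy of whatever
    you need, e.g. object boxes/IDs from frame_meta, or a numpy copy of
    the frame from pyds.get_nvds_buf_surface()."""

    def __init__(self, maxlen=150):            # ~5 s at 30 fps
        self.frames = deque(maxlen=maxlen)     # old frames drop off automatically

    def push(self, frame_num, objects):
        # `objects` is a list of dicts like {"id": ..., "bbox": ...}
        self.frames.append((frame_num, objects))

    def dump_track(self, out_dir, track_id):
        """Write the buffered history of one track to out_dir as JSON.
        Returns the number of files written."""
        os.makedirs(out_dir, exist_ok=True)
        written = 0
        for frame_num, objects in self.frames:
            hits = [o for o in objects if o["id"] == track_id]
            if not hits:
                continue
            path = os.path.join(out_dir, f"frame_{frame_num:06d}.json")
            with open(path, "w") as f:
                json.dump({"frame": frame_num, "objects": hits}, f)
            written += 1
        return written
```

The deque's maxlen does the windowing for you; the only DeepStream-specific decision is what to copy in the probe, since metadata must not be kept past the buffer's lifetime.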

DeepStream can't do cross-camera tracking currently. Here is cross-camera tracking: Metropolis - Multi-camera Tracking

Thank you for the clarification.

If I want to access the output Re-ID feature for each object, how can I do this?

I can find obj_user_meta_list for each object in obj_meta. What is the proper way to get the feature tensor from this obj_user_meta?

import ctypes
import numpy as np
import pyds

l_obj = frame_meta.obj_meta_list
while l_obj is not None:
    try:
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break

    object_id = obj_meta.object_id
    l_user = obj_meta.obj_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        ptr = ctypes.cast(pyds.get_ptr(tensor_meta),
                          ctypes.POINTER(ctypes.c_float))
        features = np.ctypeslib.as_array(ptr, shape=(128,))
        l_user = l_user.next

    l_obj = l_obj.next

I tried this, but the features I am getting don't look like the Re-ID features I want to extract.

I also tried to debug base_meta.meta_type for this user meta:

l_obj = frame_meta.obj_meta_list
while l_obj is not None:
    try:
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break

    object_id = obj_meta.object_id
    l_user = obj_meta.obj_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        print(user_meta.base_meta.meta_type)
        l_user = l_user.next

    l_obj = l_obj.next

which gives the output:

NvDsMetaType.???

This probe is placed after the tracker, with the following config:

%YAML:1.0

BaseConfig:
  minDetectorConfidence: 0.1
TargetManagement:
  preserveStreamUpdateOrder: 0
  maxTargetsPerStream: 150
  minIouDiff4NewTarget: 0.0602
  minTrackerConfidence: 0.7312
  probationAge: 15
  maxShadowTrackingAge: 60
  earlyTerminationAge: 1
TrajectoryManagement:
  useUniqueID: 0
  reidExtractionInterval: 0
DataAssociator:
  dataAssociatorType: 0
  associationMatcherType: 1
  checkClassMatch: 1
  thresholdMahalanobis: 34.3052
  minMatchingScore4Overall: 0.0231
  minMatchingScore4SizeSimilarity: 0.3104
  minMatchingScore4Iou: 0.3280
  minMatchingScore4ReidSimilarity: 0.6805
  matchingScoreWeight4SizeSimilarity: 0.7103
  matchingScoreWeight4Iou: 0.5429
  matchingScoreWeight4ReidSimilarity: 0.6408
  tentativeDetectorConfidence: 0.0483
  minMatchingScore4TentativeIou: 0.5093
StateEstimator:
  stateEstimatorType: 2
  noiseWeightVar4Loc: 0.0739
  noiseWeightVar4Vel: 0.0097
  useAspectRatio: 1
ReID: # need customization
  reidType: 1
  batchSize: 500
  workspaceSize: 1000
  reidFeatureSize: 128
  reidHistorySize: 100
  inferDims: [3, 128, 128]
  networkMode: 0
  inputOrder: 0 #cwh
  colorFormat: 0
  offsets: [0.0, 0.0, 0.0]
  netScaleFactor: 1.0000
  keepAspc: 1
  #Required for uff model only
  #inputBlobName: input
  #outputBlobName: output
  onnxFile: "/opt/nvidia/deepstream/deepstream-6.3/sources/apps/my_app/ckpt.finetune-pair-biplav-epoch55.onnx"
  outputReidTensor: 1

Please refer write_reid_track_output() in /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-app/deepstream_app.c

Thanks for the reference. As I went through the code:

static void
write_reid_track_output (AppCtx * appCtx, NvDsBatchMeta * batch_meta)
{
  if (!appCtx->config.reid_track_dir_path)
    return;

  gchar reid_file[1024] = { 0 };
  FILE *reid_params_dump_file = NULL;
  /** Find batch reid tensor in batch user meta. */
  NvDsReidTensorBatch *pReidTensor = NULL;
  for (NvDsUserMetaList *l_batch_user = batch_meta->batch_user_meta_list; l_batch_user != NULL;
      l_batch_user = l_batch_user->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l_batch_user->data;
    if (user_meta && user_meta->base_meta.meta_type == NVDS_TRACKER_BATCH_REID_META) {
      pReidTensor = (NvDsReidTensorBatch *) (user_meta->user_meta_data);
    }
  }

  /** Save the reid embedding for each frame. */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /** Create dump file name. */
    guint stream_id = frame_meta->pad_index;
    g_snprintf (reid_file, sizeof (reid_file) - 1,
        "%s/%02u_%03u_%06lu.txt", appCtx->config.reid_track_dir_path,
        appCtx->index, stream_id, (gulong) frame_meta->frame_num);
    reid_params_dump_file = fopen (reid_file, "w");
    if (!reid_params_dump_file)
      continue;

    if (!pReidTensor)
      continue;

    /** Save the reid embedding for each object. */
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      guint64 id = obj->object_id;

      for (NvDsUserMetaList * l_obj_user = obj->obj_user_meta_list; l_obj_user != NULL;
          l_obj_user = l_obj_user->next) {

        /** Find the object's reid embedding index in user meta. */
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_obj_user->data;
        if (user_meta && user_meta->base_meta.meta_type == NVDS_TRACKER_OBJ_REID_META
            && user_meta->user_meta_data) {

          gint reidInd = *((int32_t *) (user_meta->user_meta_data));
          if (reidInd >= 0 && reidInd < (gint)pReidTensor->numFilled) {
            fprintf (reid_params_dump_file, "%lu", id);
            for (guint ele_i = 0; ele_i < pReidTensor->featureSize; ele_i++) {
              fprintf (reid_params_dump_file, " %f",
                pReidTensor->ptr_host[reidInd * pReidTensor->featureSize + ele_i]);
            }
            fprintf (reid_params_dump_file, "\n");
          }
        }
      }
    }
    fclose (reid_params_dump_file);
  }
}

I could find a meta type called NVDS_TRACKER_BATCH_REID_META in the C implementation.

But when I used this in the Python version of the DeepStream implementation as below, I couldn't find this meta:

import pyds
from gi.repository import Gst

def tiler_src_pad_buffer_probe_tracker_test(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_user = batch_meta.batch_user_meta_list
    while l_user is not None:
        try:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        except StopIteration:
            break
        print(user_meta.base_meta.meta_type)
        if user_meta and user_meta.base_meta.meta_type == pyds.NVDS_TRACKER_BATCH_REID_META:
            pReidTensor = user_meta.user_meta_data
        try:
            l_user = l_user.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

I get the following meta types when I print base_meta.meta_type:

NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
NvDsMetaType.NVDS_TRACKER_PAST_FRAME_META
NvDsMetaType.???

But I got an attribute error on the following line:

if user_meta and user_meta.base_meta.meta_type == pyds.NVDS_TRACKER_BATCH_REID_META:

AttributeError: module 'pyds' has no attribute 'NVDS_TRACKER_BATCH_REID_META'

Is it because this meta_type has not been integrated into the pyds wrapper yet?

Yes, it seems you need to add those Python bindings yourself, as the bindings are open-source code.

Are there any instructions available on how to add the bindings? It would be a great help if you could point me to which parts of the code I need to change!

As I looked into the open-source binding code, I found this:

Will the changes here suffice to add the binding for this meta type?

Regards

You can refer to the PAST_FRAME bindings: deepstream_python_apps/bindings/src/bindtrackermeta.cpp at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

I went through the code base; I think we also need to add a binding for NVDS_TRACKER_OBJ_REID_META.

Is there any plan from the DeepStream developer team to include these bindings soon?

If we had a timeline, we could plan our development accordingly.

I am also interested in knowing how I can do this in Python. Is there a guide for adding new bindings?

@st123439 Would you please share with me what you found as a conclusion?

Thanks.

Hi, I was able to add the bindings for the required structs and meta types used in write_reid_track_output() by modifying files in bindings and src.

Please remove the .txt extension and replace the corresponding files in the existing repo with these. Recompile to produce a new package and pip3 install it; then you will be able to use this meta in your Python code.

bindtrackermeta.cpp.txt (6.3 KB)
trackermetadoc.h.txt (8.0 KB)
bindnvdsmeta.cpp.txt (27.5 KB)
nvdsmetadoc.h.txt (23.7 KB)

Hi @st123439

Thank you for sharing these files! It is so nice of you.

So, I have to replace the existing files with these. Am I recompiling the DeepStream container, or do you mean building with cmake?
What about the pip3 install part?

I mean build the bindings repo with cmake, which will produce the Python library to install.

Please follow this guide for building the bindings: deepstream_python_apps/bindings at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

Okay, I replaced the existing files with the ones you gave me.

I am facing an issue now. Whenever I add "outputReidTensor: 1", my pipeline stops at the first frame that contains objects, as if the pipeline freezes.

I am using the model provided in the NVIDIA docs.

ReID:
  batchSize: 512
  workspaceSize: 1000
  reidFeatureSize: 512
  reidHistorySize: 100
  inferDims: [3,384, 128]
  networkMode: 1
  # [Input Preprocessing]
  inputOrder: 0
  colorFormat: 0
  offsets: [109.1250, 102.6000, 91.3500]
  netScaleFactor: 0.01742919
  keepAspc: 1
  # [Paths and Names]
  onnxFile: /opt/nvidia/deepstream/deepstream/samples/models/Tracker/ghost_reid.onnx
  modelEngineFile: /opt/nvidia/deepstream/deepstream/samples/models/Tracker/ghost_reid.engine
  #outputReidTensor : 1

I converted the model myself, so I am sure it works with this batch size.

Do you know what the problem is? @st123439 @kesong

Can you have a try with deepstream_app?

I need a Python example to try it.
@kesong

Is there an easier way to get the Re-ID tensor for each object?

I can see that the C code does it in two steps. But if my pipeline runs multi-stream, it would be harder to know which tensor belongs to which object.

Did you do that in your code?
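As a side note on the two-step lookup: the per-object index is stored on each object's own obj_user_meta, so even with multiple streams there is no ambiguity about which row of the batch tensor belongs to which object. Once the index and the flat host buffer have been extracted (which requires the custom bindings discussed above), the slicing itself reduces to a small pure function. The function name and argument layout below are illustrative, not part of any API:

```python
import numpy as np

def feature_for_object(reid_ind, batch_features, feature_size):
    """Slice one object's embedding out of the flat batch Re-ID buffer.

    `batch_features` stands for pReidTensor.ptr_host viewed as a flat
    float array of numFilled * featureSize values; `reid_ind` is the
    int32 index stored in that object's NVDS_TRACKER_OBJ_REID_META.
    Returns None for objects without a valid embedding this frame
    (mirroring the bounds check in write_reid_track_output()).
    """
    if reid_ind < 0 or (reid_ind + 1) * feature_size > len(batch_features):
        return None
    start = reid_ind * feature_size
    return np.asarray(batch_features[start:start + feature_size])
```

So per frame you would read each object's index from NVDS_TRACKER_OBJ_REID_META and call something like this against the one NVDS_TRACKER_BATCH_REID_META buffer of the batch.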