Retrieve RE-ID Features for use during Tracking Algorithm Implementation Phase

Hello,

Is there a way to retrieve the RE-ID features during the tracking algorithm implementation phase? I want to customize the matching algorithm (instead of using the out-of-the-box DeepSORT implementation available in DeepStream 6).

The pipeline is as follows:

NVStreammux → PGIE → Tracker → SGIE_1 → SGIE_2 → Nvtiler, osd, etc … → Sink

I want to utilize RE-ID features from an SGIE model for use in a custom tracking algorithm (similar to DeepSORT, but a custom implementation). When I attach the SGIE responsible for generating RE-ID features before the tracker (like the following)

PGIE → SGIE (reid-features) → Tracker → …

the RE-ID features are not accessible via the probe function (they cannot be found under user_meta).

I need to extract the RE-ID features via the probe function for additional processing (e.g. sending the data via a message broker). I am guessing that since RE-ID features cannot be extracted via NvDsObjectMeta, we need to set output-tensor-meta=1 and extract them from user_meta_data? Please correct me if I am wrong.
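To make my guess concrete, the traversal I have in mind looks roughly like this. The field names mirror the DeepStream headers (obj_user_meta_list entries of type NVDSINFER_TENSOR_OUTPUT_META carrying an NvDsInferTensorMeta), but the structs below are simplified stand-ins so the sketch is self-contained — they are not the real nvdsmeta.h / nvdsinfer.h definitions:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-ins for the DeepStream meta types involved; the real
// definitions live in nvdsmeta.h / nvdsinfer.h. Field names mirror the SDK,
// but this is an illustrative mock, not the actual headers.
enum NvDsMetaType { NVDS_META_UNKNOWN = 0, NVDSINFER_TENSOR_OUTPUT_META = 12 };

struct NvDsInferTensorMeta {
    unsigned int num_output_layers;
    void **out_buf_ptrs_host;              // one host buffer per output layer
    std::vector<unsigned int> layer_sizes; // elements per layer (mock helper)
};

struct NvDsUserMeta {
    NvDsMetaType meta_type;
    void *user_meta_data;
};

// Walk an object's user-meta list and copy out the first tensor-output
// buffer, i.e. the RE-ID embedding the SGIE attaches when
// output-tensor-meta=1 is set in its config.
std::vector<float> extract_reid_features(
        const std::vector<NvDsUserMeta> &obj_user_meta_list) {
    for (const NvDsUserMeta &um : obj_user_meta_list) {
        if (um.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
            continue;
        auto *tm = static_cast<NvDsInferTensorMeta *>(um.user_meta_data);
        if (tm->num_output_layers == 0)
            continue;
        const float *buf = static_cast<const float *>(tm->out_buf_ptrs_host[0]);
        return std::vector<float>(buf, buf + tm->layer_sizes[0]);
    }
    return {}; // no RE-ID tensor attached to this object
}
```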

• Hardware Platform: GPU
• DeepStream Version: 6.0.1
• TensorRT Version: 8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only): 510.54

Thank you and best regards,
Jay

1. Please refer to deepstream-test2 to test nvtracker. Here is the official nvtracker doc: Gst-nvtracker — DeepStream 6.0.1 Release documentation

2. The tracker's output does not include RE-ID features. Please refer to Gst-nvtracker — DeepStream 6.0.1 Release documentation:

  • Output
    • Gst Buffer (provided as an input)
    • NvDsBatchMeta (with addition of tracked object coordinates, tracker confidence and object IDs in NvDsObjectMeta)

3. nvtracker uses a low-level tracker library to track the detected objects. It supports any low-level library that implements the NvDsTracker API, including your own DeepSORT implementation. Users can implement a custom low-level tracker library; see: Gst-nvtracker — DeepStream 6.0.1 Release documentation
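As a rough illustration of the shape of such a library, here is a toy sketch of the entry points a custom low-level tracker exports. The types are heavily simplified stand-ins; the real nvdstracker.h signatures take config, batch, and response structs (NvMOTConfig, NvMOTConfigResponse, per-stream batched frames), so treat this only as a skeleton:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in types loosely modeled on nvdstracker.h; the real structs are
// richer (batched streams, surface buffers, past-frame data). This mock
// exists only so the entry-point shape is concrete and compilable here.
enum NvMOTStatus { NvMOTStatus_OK = 0, NvMOTStatus_Error = 1 };

struct NvMOTObjToTrack { float x, y, w, h; float confidence; };
struct NvMOTProcessParams { std::vector<NvMOTObjToTrack> objects; }; // detector output for one frame
struct NvMOTTrackedObj { NvMOTObjToTrack obj; uint64_t trackId; };
struct NvMOTTrackedObjBatch { std::vector<NvMOTTrackedObj> tracks; };

struct NvMOTContext { uint64_t nextTrackId = 0; };
using NvMOTContextHandle = NvMOTContext *;

// The core entry points a custom low-level library must export. This toy
// "tracker" just hands every detection a fresh ID; a real implementation
// would associate detections with existing tracks (e.g. DeepSORT-style).
NvMOTStatus NvMOT_Init(NvMOTContextHandle *pContextHandle) {
    *pContextHandle = new NvMOTContext();
    return NvMOTStatus_OK;
}

NvMOTStatus NvMOT_Process(NvMOTContextHandle ctx,
                          NvMOTProcessParams *pParams,
                          NvMOTTrackedObjBatch *pOut) {
    pOut->tracks.clear();
    for (const NvMOTObjToTrack &det : pParams->objects)
        pOut->tracks.push_back({det, ctx->nextTrackId++});
    return NvMOTStatus_OK;
}

void NvMOT_DeInit(NvMOTContextHandle ctx) { delete ctx; }
```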

Thank you for your response.

I have my own custom implementation of the tracker, which is working well in the Gstreamer pipeline.

My question is: when we write a custom gst-nvtracker implementation, can we access RE-ID features in the same way that the built-in DeepSORT implementation in gst-nvtracker does?

We want to override the IoU matching algorithm of DeepSORT while also doing visual feature matching with the RE-ID features.
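To make it concrete, the combined cost we are aiming for looks roughly like the sketch below. The weighting scheme and all names here are our own illustration, not anything from the DeepStream API:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of the combined cost we want: replace DeepSORT's pure
// IoU gating with a weighted sum of IoU distance and cosine distance over
// the RE-ID embeddings. The lambda weighting is our own choice.
struct BBox { float x, y, w, h; }; // top-left corner + size

float iou(const BBox &a, const BBox &b) {
    float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

float cosine_distance(const std::vector<float> &u, const std::vector<float> &v) {
    float dot = 0.f, nu = 0.f, nv = 0.f;
    for (size_t i = 0; i < u.size(); ++i) {
        dot += u[i] * v[i]; nu += u[i] * u[i]; nv += v[i] * v[i];
    }
    return 1.f - dot / (std::sqrt(nu) * std::sqrt(nv) + 1e-12f);
}

// lambda = 1 -> IoU only, lambda = 0 -> appearance only.
float matching_cost(const BBox &track_box, const std::vector<float> &track_feat,
                    const BBox &det_box, const std::vector<float> &det_feat,
                    float lambda = 0.5f) {
    return lambda * (1.f - iou(track_box, det_box)) +
           (1.f - lambda) * cosine_distance(track_feat, det_feat);
}
```

A Hungarian (or greedy) assignment over this cost matrix would then replace DeepSORT's default association step.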

As you know, deepstream-test2 uses libnvds_nvmultiobjecttracker.so, which is NVIDIA's low-level tracker library. You can implement your own custom low-level tracker library; please refer to the official doc for details: Gst-nvtracker — DeepStream 6.0.1 Release documentation. In the NvMOT_Process API you can "process the given video frame using the user-defined method in the context, and generate outputs", so you can access RE-ID features in NvMOT_Process.

Thank you for the response.

As far as I know, contextHandle->processFrame works with "input data of the video frames and the detector object information". Currently, it does not have access to RE-ID features, which are provided by a RE-ID model (an SGIE, or a hidden black-box model such as the one provided in the DeepStream DeepSORT tracker).

Is there a way to access RE-ID features from NvMOTProcessParams *params? As far as I know, the only metadata we have access to at this point in the pipeline is the frame meta and the object metadata added by the PGIE (object detector).

Thank you for your patience.

For the tracker plugin, NvMOTProcessParams *params only includes the input data that comes from the upstream plugin, so you can't get RE-ID features from this struct. Please refer to nvdstracker.h.
To access RE-ID features, you need to customize NvMOT_Process to run the inference and access the RE-ID features there. Please refer to the doc for details.

Thank you very much for the clear response.

I am guessing that one method of accessing RE-ID features is to deserialize a model (using the TensorRT API) and perform the inference inside of the NvMOT_Process function, passing the cropped input frame into the deserialized model. Would this be correct?

If you recommend a better way, that would also be very much appreciated. Thank you!

Yes. NvMOT_Process's second parameter is the input data and the third parameter is the output data, so you can implement your own tracking method in NvMOT_Process. For the DeepSORT method, you need to call the TensorRT API to deserialize the model, run inference, and then access the RE-ID features.
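As a rough sketch of the per-object flow that would sit inside NvMOT_Process: crop each detection from the frame, then run the patch through the RE-ID engine deserialized at init time via the TensorRT API (nvinfer1::createInferRuntime and IRuntime::deserializeCudaEngine). The crop helper below is a plain CPU illustration over an interleaved RGB buffer; the inference step is described only in comments, and a production implementation would crop, scale, and infer on the GPU:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative preprocessing for a DeepSORT-style tracker: extract an
// object patch from the frame before feeding it to the RE-ID engine.
struct Patch { int w, h; std::vector<uint8_t> rgb; }; // interleaved RGB

Patch crop_rgb(const uint8_t *frame, int frameW, int frameH,
               int x, int y, int w, int h) {
    // Clamp the ROI to the frame bounds.
    if (x < 0) { w += x; x = 0; }
    if (y < 0) { h += y; y = 0; }
    if (x + w > frameW) w = frameW - x;
    if (y + h > frameH) h = frameH - y;

    Patch p{w, h, std::vector<uint8_t>(static_cast<size_t>(w) * h * 3)};
    for (int row = 0; row < h; ++row) {
        const uint8_t *src =
            frame + (static_cast<size_t>(y + row) * frameW + x) * 3;
        std::copy(src, src + static_cast<size_t>(w) * 3,
                  p.rgb.data() + static_cast<size_t>(row) * w * 3);
    }
    return p;
}
// Each patch would then be resized to the RE-ID network's input shape and
// run through the deserialized engine (e.g. via
// nvinfer1::IExecutionContext::executeV2) to obtain the embedding used for
// appearance matching.
```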


Thank you very much for the response. I understand what needs to be done (was double checking to see if there was any better method). Thank you for your time!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.