How to use MTMC for custom objects?

Currently, I’m trying to use the MTMC (Metropolis) module for a custom object. I have already trained a custom detection model and a custom ResNet-50-based re-identification model for the tracker, following the documentation.

This detector+tracker setup worked perfectly for me in a single-input pipeline (using deepstream-app), even re-identifying the same instance after a few minutes. Now I want to extend this to multi-camera operation, so I decided to use MTMC.

I have already changed my pgie, tracker, input files, and objectType in behavior analytics. However, the main Metropolis interface does not show any matches in the global ID. Through VST v12.35 I was able to confirm that my detector is working: with overlay debug enabled, each bounding box shows two values, and the second value (from left to right) matches across the different camera inputs for the same object.
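For context, the kind of change I made to the pgie (nvinfer) config looks roughly like this; the paths are placeholders for my custom model, and the class count reflects my 5 custom classes:

```ini
[property]
gpu-id=0
# placeholder paths to my custom detector engine and label file
model-engine-file=/path/to/custom_detector.engine
labelfile-path=/path/to/custom_labels.txt
# 5 custom classes instead of the default Metropolis classes
num-detected-classes=5
# network-type=0 -> detector
network-type=0
```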

I have already read the documentation available at (https://developer.nvidia.com/docs/mdx/dev-guide/text/Multi-Camera_Tracking_toc.html), and my VST output with overlay debug for my study classes is quite similar to what is shown in the documentation (https://developer.nvidia.com/docs/mdx/dev-guide/text/media-service/VST_Quickstart_guide.html).

My first question is about that second value shown in VST when overlay debug is enabled: is it related to the assigned global ID?

My other question is about the steps I should follow to get MTMC working with custom classes (5 custom classes). In other words, I want to work with classes that are not part of the default Metropolis pipeline.
