RE-identification in multiple camera system

Brief: Need to find out whether nvtracker re-identification can work in a multi-camera system, and how.

Hardware details :
• Hardware Platform: Jetson
• DeepStream Version: 6.4-multiarch
• JetPack Version: 5.2
• Issue Type: questions

Hey! I need to find out whether I can build a DeepStream pipeline that takes multiple camera inputs and performs re-identification of people across cameras. Assume all cameras are perfectly synchronised and cover different areas of a warehouse-like location.
Currently I am able to run a single-feed instance using nvtracker with this config:

ll-lib-file: "/opt/nvidia/deepstream/deepstream-6.4/lib/"
ll-config-file: "/opt/nvidia/deepstream/deepstream-6.4/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml"
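For context, the `perf` tracker config above does not enable ReID. As far as I can tell, DeepStream 6.x also ships a `config_tracker_NvDCF_accuracy.yml` sample whose `ReID:` section wires a ReIdentificationNet model into the tracker. A sketch of that section is below; the field names and values are from memory of the sample and should be verified against the file shipped with your DeepStream install, and the model paths are placeholders:

```yaml
ReID:
  reidType: 1          # enable deep-feature ReID (check the enum comment in the sample)
  batchSize: 100       # max objects embedded per inference batch
  workspaceSize: 1000  # TensorRT workspace size (MB)
  reidFeatureSize: 256 # ReIdentificationNet embedding size
  inferDims: [3, 256, 128]  # model input CHW
  networkMode: 1       # 0=FP32, 1=FP16
  onnxFile: "…/resnet50_market1501.onnx"  # placeholder path to the downloaded model
  modelEngineFile: "…/resnet50_market1501.onnx_b100_gpu0_fp16.engine"  # placeholder
```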

Please assist, thanks.

Please refer here for multiple camera tracking: NVIDIA Multi-Camera Tracking AI Workflow

Hey! Thanks for the reply. I already checked that out and downloaded ReIdentificationNet model version 1.2 from NGC.
Can you please tell me how to use the model in my current pipeline:

appsrc1 -> nvvideoconvert -> caps_filter -> | nvstreammux -> nvinfer -> nvtracker -> nvvideoconvert -> caps_filter -> appsink
appsrc2 -> nvvideoconvert -> caps_filter -> | 
appsrc3 -> nvvideoconvert -> caps_filter -> | 
appsrc4 -> nvvideoconvert -> caps_filter -> | 

I have 4 sources coming from appsrc. Where do I put the re-identification model in the pipeline, and how should I proceed with feature matching?
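To make the feature-matching question concrete, here is a minimal, hypothetical sketch of that step on its own, independent of DeepStream. It assumes each tracked object already carries a ReID embedding (for example, pulled from the output tensor meta of a secondary nvinfer running ReIdentificationNet), and matches embeddings across cameras by cosine similarity against a gallery of previously assigned global IDs. The class name and threshold are illustrative assumptions, not part of any NVIDIA API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class CrossCameraMatcher:
    """Assigns global IDs to tracked objects from any camera by matching
    their ReID embeddings against a gallery of already-seen identities."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.gallery = {}        # global_id -> representative embedding
        self.next_global_id = 0

    def match(self, embedding):
        """Return the best-matching global ID, or enroll a new identity."""
        best_id, best_sim = None, self.threshold
        for gid, ref in self.gallery.items():
            sim = cosine_similarity(embedding, ref)
            if sim >= best_sim:
                best_id, best_sim = gid, sim
        if best_id is None:
            best_id = self.next_global_id
            self.next_global_id += 1
        self.gallery[best_id] = embedding  # keep the latest embedding
        return best_id
```

In a real deployment the gallery would need aging/eviction and a more robust representative (e.g. an average of recent embeddings), but the core cross-camera association reduces to this similarity search.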


You should apply for Metropolis Microservices Early Access at: Metropolis Microservices Early Access | NVIDIA Developer
Then you can deploy MTMC for multi-camera tracking by downloading the MTMC microservices.

I have already applied for and received early access, and I have downloaded the model file for ReIdentificationNet.
Can you please assist me with the question above?

You can run MTMC without any model change based on the guide: Log in | NVIDIA Developer
MTMC can currently only run on NVIDIA dGPU.

I am specifically looking for an edge-based solution here. Can you please suggest something else?

Your use case is multiple RTSP cameras streaming video to Jetson, and you want to track people across cameras on Jetson. Can you share more details of your requirement? I will check the requirement internally and report back here.

Sure thing!
My current use case involves a restaurant location with 3 cameras:

  • Main seating area
  • Prohibited area
  • Counter area

I want to build a pipeline that performs person detection using the PeopleNet model, with tracking and re-identification using the nvtracker plugin. This will yield important information such as how long a customer takes from entering the store to reaching the counter, the queue length at the counter, etc., so that the customer experience can be improved.
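To make those metrics concrete, here is a hypothetical sketch of how dwell time and queue length could be computed once cross-camera re-identification yields per-frame (timestamp, camera, global_id) records. All names (the record shape, the camera labels, the function names) are illustrative assumptions, not DeepStream API:

```python
def dwell_times(events, entry_cam, counter_cam):
    """Seconds from a person's first sighting at the entry camera to
    their first sighting at the counter camera, keyed by global ID.
    `events` is an iterable of (timestamp_s, camera, global_id)."""
    first_seen = {}  # (camera, global_id) -> first timestamp
    for ts, cam, gid in events:
        first_seen.setdefault((cam, gid), ts)
    out = {}
    for (cam, gid), ts in first_seen.items():
        if cam == counter_cam and (entry_cam, gid) in first_seen:
            out[gid] = ts - first_seen[(entry_cam, gid)]
    return out

def queue_length(events, counter_cam, t_start, t_end):
    """Number of distinct people seen at the counter camera in a window."""
    return len({gid for ts, cam, gid in events
                if cam == counter_cam and t_start <= ts <= t_end})
```

The key point is that both metrics only need stable global IDs; once the tracker/ReID stage provides those, the analytics layer is simple aggregation.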

In deepstream terms, I am currently running a single camera pipeline which is as follows,

appsrc -> nvvideoconvert -> caps_filter -> nvstreammux -> nvinfer -> nvtracker -> nvvideoconvert -> caps_filter -> appsink

Moving to the MMJ forum. I will check internally and report back here.

We will review the requirement for MTMC on Jetson and report back if there is any progress.

Okay, thanks for the update @kesong

@kesong any update on this?

Here is the latest update for MTMC on Jetson: MTMC microservice for Jetson