Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 440.100
• Issue Type (questions, new requirements, bugs) Questions
So I am trying to implement multi-camera tracking, and thought I would try the 360D DeepStream demo example. To start with, I am running the deepstream-test5 app with just the 2 cameras shown below.
The idea is to detect a vehicle in camera 1 and track it as it drives into the field of view of camera 2. I have used the Python tracking docker and am sending results from DeepStream to it. I used homography to filter camera detections in certain regions, compute the latitude and longitude, and attach them to the object. The tracker runs and initially detects the first vehicle and assigns it a global ID. The problem is that, no matter what settings I change, the tracker always matches the frame cluster to that same ID, even if the vehicle appears 15 minutes later!
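For context, here is a minimal sketch of the homography step I described: projecting a pixel coordinate to (lat, lon) with a 3x3 homography. The matrix values and the `pixel_to_latlon` helper are illustrative only; in my actual pipeline the matrix is estimated from surveyed ground-control points (e.g. via `cv2.findHomography`).

```python
import numpy as np

# Hypothetical homography mapping pixel coords -> (lon, lat) homogeneous
# coords; real values come from calibrating each camera against surveyed
# ground-control points (e.g. cv2.findHomography).
H = np.array([
    [1e-5, 0.0, -77.03],
    [0.0, -1e-5, 38.90],
    [0.0,  0.0,   1.0],
])

def pixel_to_latlon(x, y, H):
    """Apply the 3x3 homography to a pixel and normalize by w."""
    p = H @ np.array([x, y, 1.0])
    lon, lat = p[0] / p[2], p[1] / p[2]
    return lat, lon

# Example: project the image center of a 1920x1080 frame.
lat, lon = pixel_to_latlon(960, 540, H)
```

The resulting (lat, lon) pair is what I attach to each object message sent to the tracking docker.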
The only difference between my code and the 360d parking app is the event type attached to each object: moving, parked, entry, exit. What are these used for in the tracker?
Any help or ideas would be great. I have tried changing CLUSTER_DIST_THRESH_IN_M and MACTH_MAX_DIST_IN_M, but anywhere from 0.5 m to 25 m it makes no difference.
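For reference, this is roughly what I have been tuning (the key names are the ones above; the layout and values shown here are just illustrative placeholders, not my exact config file):

```
CLUSTER_DIST_THRESH_IN_M: 2.0   # tried values from 0.5 up to 25
MACTH_MAX_DIST_IN_M: 2.0        # tried values from 0.5 up to 25
```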