DeepStream 360D Smart parking application Tracker

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 440.100
• Issue Type( questions, new requirements, bugs) Questions

I am trying to implement multi-camera tracking, so I thought I would start from the 360D DeepStream demo example. To begin with, I am running the deepstream-test5 app with just two cameras.

The idea is to detect a vehicle in camera 1 and track it as it drives into the field of view of camera 2. I have used the Python tracking docker and am sending results from DeepStream to it. I use a homography to filter camera detections to certain regions, compute the latitude and longitude, and attach them to the object. The tracker runs and initially detects the first vehicle and assigns it a global ID. The problem is that, no matter what settings I change, the tracker always matches every new frame cluster to that same ID, even if the vehicle appears 15 minutes later!

The only difference between my code and the 360D parking app is the event type that gets attached to each message: moving, parked, entry, exit. What are these used for in the tracker?
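
For reference, the messages I am producing look roughly like this (field layout based on the DeepStream message schema, with illustrative values); the event block is the part I am not attaching:

example_message = {
    "messageid": "example-uuid",
    "sensor": {"id": "VSP1"},
    "object": {
        "id": "18",  # per-camera object ID from the DeepStream tracker
        "bbox": {"topleftx": 935, "toplefty": 857,
                 "bottomrightx": 1213, "bottomrighty": 658},
        "coordinate": {"x": 0.0, "y": 0.0, "z": 0.0},
    },
    # the 360D app attaches one of: moving, parked, entry, exit
    "event": {"id": "example-uuid", "type": "moving"},
}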

Any help or ideas would be great. I have tried changing CLUSTER_DIST_THRESH_IN_M and MATCH_MAX_DIST_IN_M, but anywhere from 0.5 m to 25 m it makes no difference.
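
To be explicit about what I think those settings should control, here is a hypothetical sketch (not the actual 360D tracker code) of the kind of distance gate I would expect MATCH_MAX_DIST_IN_M to apply when a frame cluster is associated with an existing global track:

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres between two lat/long points."""
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

MATCH_MAX_DIST_IN_M = 5.0  # illustrative value only

def cluster_matches_track(cluster_latlon, track_latlon):
    # A purely spatial gate like this never retires a track, so unless the
    # tracker also applies a time-based rule, an old global ID could keep
    # matching new clusters even many minutes later.
    return haversine_m(*cluster_latlon, *track_latlon) <= MATCH_MAX_DIST_IN_M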

I have also tried running the 360D DeepStream container to compare the Kafka messages, and in the demo the speed and direction are also 0 for all objects.

I know this has been asked for years now, but is there any reason why the aisle and other NVIDIA plugins in the DeepStream parking app are not open sourced? Why not include the source code?

Hi @gabe_ddi,
We are checking this and will get back to you within the next two days.
If you can share a repo with us, that would be very helpful.

Thanks!

Just checking in to see whether you have had a chance to look at the tracking code?

In terms of a repo, it is basically just the default deepstream-test5 app sending to Kafka. From Kafka we use the following Python code to apply the homography and convert the detections to latitude and longitude.

from kafka import KafkaConsumer, KafkaProducer
from json import loads,  dumps
import cv2          # import the OpenCV library
import numpy as np  # import the numpy library

# image points (pixels) from camera 1
pts_src = np.array([[935, 857], [524, 410], [766, 340], [1213, 658]])
# corresponding geographic coordinates (longitude, latitude) for camera 1
pts_dst = np.array([[151.04524617, -33.83544186], [151.04527702, -33.83540254], [151.04529859, -33.83541373], [151.04526829, -33.83545387]])

# image points (pixels) from camera 2
pts_src2 = np.array([[50, 365], [606, 203], [1010, 343], [50, 632]])
# corresponding geographic coordinates (longitude, latitude) for camera 2
pts_dst2 = np.array([[151.045215, -33.835274], [151.045333, -33.835324], [151.045273, -33.835399], [151.045169, -33.835343]])
# calculate the homography matrix for each camera
h, status = cv2.findHomography(pts_src, pts_dst)
h2, status = cv2.findHomography(pts_src2, pts_dst2)

consumer = KafkaConsumer(
    'metromind-raw',
     bootstrap_servers=['localhost:9092'],
     auto_offset_reset='earliest',
     enable_auto_commit=True,
     group_id='my-group1',
     value_deserializer=lambda x: loads(x.decode('utf-8')))
    
producer = KafkaProducer(bootstrap_servers=['localhost:9092'],
                         value_serializer=lambda x: 
                         dumps(x).encode('utf-8'))

for message in consumer:
    message = message.value
    # print("{}".format(message))
    if message["object"] is not None:
        if message["sensor"]["id"] is not None:
            if message["sensor"]["id"] == "VSP2":
                h = h2
        # print(message["object"]["bbox"])
        BoundingBox = message["object"]["bbox"]
        # Getting centroid of the detected bounding box
        vehicleTopLeftX = BoundingBox["topleftx"]
        vehicleTopLeftY = BoundingBox["toplefty"]
        vehicleBottomRightX = BoundingBox["bottomrightx"]
        vehicleBottomRightY = BoundingBox["bottomrighty"]
        CentroidX = (vehicleTopLeftX + vehicleBottomRightX) / 2
        CentroidY = (vehicleTopLeftY + vehicleBottomRightY) / 2
        # print("Vehicle in car Park with ID: {}, has Centroid: {},{}".format(message["messageid"], CentroidX, CentroidY))
        # map the centroid from image coordinates to geographic coordinates
        a = np.array([[[CentroidX, CentroidY]]], dtype='float32')

        # get the mapping; pointsOut follows the (longitude, latitude)
        # ordering used in pts_dst
        pointsOut = cv2.perspectiveTransform(a, H)
        message["object"]["coordinate"]["y"] = pointsOut.item(0)
        message["object"]["coordinate"]["x"] = pointsOut.item(1)
        # print (pointsOut[0])
        # print(pointsOut)
        print("Vehicle in car Park with ID: {}, has Centroid in Global Coordinates: {}, {}".format(message["messageid"], pointsOut.item(0), pointsOut.item(1)))

        producer.send('metromind-anomaly', value=message)

The rest is the default tracker from the NVIDIA 360D parking app.

@mchi it has been a week since the last response; has the team had a chance to look into the tracking application?

@mchi it has been two weeks since the last response; any update on this?

Hi @gabe_ddi,
Sorry for the long delay! We are reviewing this issue internally and will get back to you very soon.

Thanks!

Hi @gabe_ddi,
Sorry for the delay! Please check the reply below and let me know if you have further questions.
You cannot do it in test5; you need to write your own code for the server-side application. In the case of the 360D app, objects in each camera are tracked in their own context. DeepStream sends the object ID, camera coordinates, and global coordinates to the analytics docker, where multi-camera tracking is performed.
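
As a rough illustration (field names assumed, not an exact payload), each per-camera message carries that camera's own object ID plus the global coordinates, and it is the analytics tracker that assigns the single global ID across cameras:

per_camera_message = {
    "sensor": {"id": "VSP1"},   # which camera produced this detection
    "object": {
        "id": "42",             # object ID local to this camera's tracker context
        "coordinate": {"x": -33.8354, "y": 151.0452, "z": 0.0},  # global coordinates
    },
}
# The analytics docker clusters such messages across cameras and assigns
# one global ID per physical vehicle.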

Thanks!

Why can I not use test5? I have written my own post-processing function in Python that takes the raw detections in and does the same processing that the 360D app does with the nvaisle and nvspot plugins. My issue is not with getting the individual context from each camera; it is with using the multi-camera tracking module. I send the per-camera detections and map them into the global coordinate frame so the tracker can associate the data. My question is about the tracker configuration settings CLUSTER_DIST_THRESH_IN_M and MATCH_MAX_DIST_IN_M.

Hi @gabe_ddi,
Sorry! I mean you cannot compare the deepstream-test5 app with the 360D app, because in the 360D app multi-camera tracking happens on the analytics (docker) server side.
Could you take a look at whether Questions on 360 Degree Smart Parking Application helps your case?

Thanks!