Object Tracking using Deepstream Python SDK

I am using the DeepStream Python SDK and want to track objects while they are in the frame. The input is an RTSP stream, and I am using the deepstream-test3 app as a reference. I am mainly interested in tracking people, and I am using the default ResNet model for now. Here are some of the steps I have already tried for object tracking.

  1. I combined deepstream-test2 and deepstream-test3 to use nvtracker, and I access the object ID from NvDsObjectMeta->object_id. This sometimes assigns a different ID to the same person. How can I make this more accurate and robust? I am using the default configurations.

  2. I tried the deepstream-imagedata-multistream app to extract frames using get_nvds_buf_surface. This gives me the flexibility to use my own tracking algorithms with OpenCV. However, this method is very slow and I am not able to get real-time results. Is that because this program does not utilise the GPU on the Jetson Nano? I referred to this https://docs.nvidia.com/metropolis/deepstream/python-api/Methods/methodsdoc.html#get-nvds-buf-surface

Please guide me on which path I should take. I want a highly accurate tracker running on the Jetson Nano.

• Hardware Platform: Jetson Nano
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.0


The object ID is assigned in the order objects are detected.
The same ID in different pipelines does not indicate the same object.
So you should evaluate accuracy by ID consistency rather than by the ID value itself.
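To make "checking ID consistency" concrete: what matters is how often the tracker ID assigned to the same physical object changes across frames (an "ID switch"). A minimal, non-DeepStream illustration (the per-frame data format here is made up for the example, not a DeepStream type):

```python
# Count ID switches: how often the tracker ID assigned to the same
# ground-truth object changes between consecutive frames it appears in.
# The input format is illustrative only, not a DeepStream structure.
def count_id_switches(frames):
    """frames: list of dicts mapping a ground-truth object key -> tracker ID."""
    last_id = {}   # ground-truth key -> tracker ID last seen for it
    switches = 0
    for frame in frames:
        for gt_key, tracker_id in frame.items():
            if gt_key in last_id and last_id[gt_key] != tracker_id:
                switches += 1
            last_id[gt_key] = tracker_id
    return switches

# One person tracked over three frames; the ID changes once (1 -> 5).
print(count_id_switches([{"person_a": 1}, {"person_a": 1}, {"person_a": 5}]))  # 1
```

A consistent tracker keeps this count at zero for as long as the person stays in frame, regardless of which numeric ID was assigned.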

Would you mind profiling your customized pipeline with nvprof first?

sudo /usr/local/cuda-10.2/bin/nvprof [your command]

If the bottleneck comes from the OpenCV algorithm, you will need to rewrite it with CUDA for acceleration.
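Depending on the OpenCV build, some per-frame work can be moved to the GPU without a full custom CUDA rewrite via the cv2.cuda module. This is a hedged sketch only: the OpenCV shipped with JetPack is often built without CUDA (check getCudaEnabledDeviceCount()), and the Gaussian blur here is just a placeholder for your own per-frame step:

```python
# Illustrative sketch: offload a per-frame OpenCV operation to the GPU via
# cv2.cuda when available. The blur is a stand-in for your own algorithm;
# imports are guarded so the snippet also loads where OpenCV is absent.
try:
    import cv2  # may be missing, or built without CUDA support
except ImportError:
    cv2 = None

def gpu_gaussian_blur(frame):
    """Blur a BGR uint8 frame on the GPU when cv2.cuda is usable, else on the CPU."""
    if cv2 is None:
        return None
    if hasattr(cv2, "cuda") and cv2.cuda.getCudaEnabledDeviceCount() > 0:
        gpu_frame = cv2.cuda_GpuMat()
        gpu_frame.upload(frame)                   # host -> device copy
        blur = cv2.cuda.createGaussianFilter(
            cv2.CV_8UC3, cv2.CV_8UC3, (5, 5), 0)
        return blur.apply(gpu_frame).download()   # device -> host copy
    return cv2.GaussianBlur(frame, (5, 5), 0)     # CPU fallback
```

Note that the upload/download copies have their own cost, so batching work on the GPU side matters; nvprof will show whether the copies or the kernel dominate.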



Hello, thanks for the response.

  1. I should have been clearer about this. When I said

I meant that it gives a different ID to the same person within the same stream. I am using an IP camera over RTSP, and when I stand in front of the live camera and then change my position, the object ID sometimes changes. The same problem is described in this thread https://forums.developer.nvidia.com/t/how-to-have-persistent-tracking-ids-when-using-deepstream-app-for-people-detection/82222 but I didn’t find answers there.

Is it possible to achieve accuracy where the object ID does not change within one particular stream until the person disappears from the frame? It would be okay if, after they reappear, the object ID is different. I want a consistent object ID for as long as they are in the frame.

Could you guide me through this? I have tried all three trackers: IOU, NvDCF, and KLT.

  2. I will do the profiling soon and will update here.


It is possible.
If a trajectory is discontinuous, the next detected object will be assigned a new ID.
The discontinuity may occur due to motion blur or a false negative from the detection algorithm.

You can try adjusting the thresholds of the tracking algorithm for your use case to see if that helps.
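For reference, the DeepStream 5.x NvDCF sample config (tracker_config.yml) exposes knobs along these lines; the values below are illustrative, and the exact parameter names should be verified against the file shipped with your SDK version:

```yaml
NvDCF:
  # Keep a lost track alive in "shadow mode" for more frames after the
  # detector misses it, so the same ID survives brief detection gaps.
  maxShadowTrackingAge: 30   # frames to keep a lost track before terminating
  probationAge: 3            # frames before a new track is confirmed
  earlyTerminationAge: 1
  # Raising this drops low-confidence detections that cause ID churn.
  minDetectorConfidence: 0.2
```

Increasing maxShadowTrackingAge is the usual first step against IDs changing when a person is briefly missed, at the cost of tracks lingering longer after someone actually leaves the frame.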


@AastaLLL I changed some of the parameters and the tracking is better now. Thanks for helping me.


I am in a similar situation to issue 2 above.

I am able to use deepstream-imagedata-multistream to extract frames and apply custom algorithms. I am okay with the slow performance until I write a CUDA implementation. However, I would like to update the nvds_buf_surface to contain the modified imagery and use that frame later in the pipeline for further processing and display.

How can I do this?


Hi bk4,

Please open a new topic for your issue. Thanks.

Hi, I’m trying to achieve tracking as well using pure DeepStream. Can you share your insights? I’ve tried to access the metadata, but it is always null:

batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buffer))
user_meta_list = batch_meta.batch_user_meta_list

user_meta_list is always None.
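For what it's worth, batch_user_meta_list is typically empty unless some upstream component (e.g. nvdsanalytics) attaches user meta; the tracker IDs discussed in this thread live on the per-object metadata (NvDsObjectMeta.object_id) instead. A sketch of walking the batch metadata with the pyds bindings, in the same style as the deepstream_python_apps samples (the import is guarded so the snippet also loads where DeepStream is not installed):

```python
# Tracker IDs are carried on each NvDsObjectMeta (object_id), not in
# batch_user_meta_list, which stays empty unless a component attaches
# user meta. pyds is guarded so this file loads without DeepStream.
try:
    import pyds
except ImportError:
    pyds = None

def collect_tracked_ids(batch_meta):
    """Walk DeepStream batch metadata; return (stream pad index, object_id) pairs."""
    ids = []
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            ids.append((frame_meta.pad_index, obj_meta.object_id))
            try:
                l_obj = l_obj.next  # raises StopIteration at list end in pyds
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return ids
```

You would call this from a pad-probe on an element downstream of nvtracker, after obtaining batch_meta with pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer)) as in your snippet above.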