I am using the DeepStream Python SDK and want to track objects while they are in the frame. My input is an RTSP stream, and I am using the deepstream-test3 app as a reference. I am mainly interested in tracking people, and I am using the default resnet model for now. Here are the steps I have already tried for object tracking.
I combined deepstream-test2 and deepstream-test3 to use nvtracker, and then accessed the object ID from NvDsObjectMeta->object_id. This sometimes assigns different IDs to the same person. How can I make this more accurate and robust? I am using the default configurations.
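For reference, the way I read the tracker-assigned ID is with a pad probe like the one in deepstream-test2. A minimal sketch (the class index 2 for person assumes the default 4-class resnet10 model; adjust if your model differs):

```python
import pyds
from gi.repository import Gst

PGIE_CLASS_ID_PERSON = 2  # assumption: default resnet10 4-class model

def tracker_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            if obj_meta.class_id == PGIE_CLASS_ID_PERSON:
                # object_id is assigned by nvtracker
                print("stream %d frame %d person id %d" %
                      (frame_meta.pad_index, frame_meta.frame_num,
                       obj_meta.object_id))
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe is attached to the tracker's src pad (or the OSD sink pad, as in the sample apps) so the metadata already carries the tracked IDs.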
I tried the deepstream-imagedata-multistream app to extract frames using get_nvds_buf_surface(). This gives me the flexibility to use my own tracking algorithms with OpenCV. However, this method is very slow and I am not able to get real-time results. Is that because this program does not utilise the GPU on the Jetson Nano? I referred to https://docs.nvidia.com/metropolis/deepstream/python-api/Methods/methodsdoc.html#get-nvds-buf-surface
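This is roughly what my extraction step looks like, following deepstream-imagedata-multistream. Note that the stream has to be converted to RGBA (via nvvideoconvert with caps "video/x-raw(memory:NVMM), format=RGBA") before get_nvds_buf_surface() will work; the CPU copy and color conversion are where most of my time goes:

```python
import pyds
import numpy as np
import cv2

def extract_frame(gst_buffer, frame_meta):
    # n_frame is a numpy array mapped onto the NvBufSurface memory (RGBA)
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    # copy out of NVMM memory so OpenCV can work on it safely
    frame = np.array(n_frame, copy=True, order='C')
    return cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)  # OpenCV-friendly BGR
```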
Please guide me on which path I should take. I want a highly accurate tracker running on the Jetson Nano.
1.
An object ID is assigned in the order the object is detected.
The same ID in a different pipeline does not indicate the same object.
So you should check accuracy via ID consistency rather than the ID value itself.
2.
Would you mind profiling your customized pipeline with nvprof first?
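The ID-consistency check from point 1 can be sketched as a small offline script: instead of comparing raw ID values, count how often the ID attached to the same person changes while they remain in frame. Here tracks are matched frame-to-frame by bounding-box IoU; the input format (per-frame lists of `(object_id, bbox)` tuples) is an assumption for illustration, not a DeepStream structure:

```python
def iou(a, b):
    # Boxes as (left, top, width, height).
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def count_id_switches(frames, iou_thresh=0.5):
    """frames: list of per-frame lists of (object_id, bbox)."""
    switches = 0
    prev = []
    for dets in frames:
        for obj_id, box in dets:
            # best-overlapping object from the previous frame
            best = max(prev, key=lambda p: iou(p[1], box), default=None)
            if best and iou(best[1], box) >= iou_thresh and best[0] != obj_id:
                switches += 1  # same physical object, different ID
        prev = dets
    return switches
```

Fewer switches over the same clip means a more consistent tracker, regardless of which numeric IDs were assigned.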
Is it possible to achieve accuracy such that the object ID for a particular stream does not change until the person disappears from the frame? It would be okay if the object ID is different after they reappear. I want a consistent object ID while they are in the frame.
Could you guide me through this? I have tried all three trackers: IOU, NvDCF and KLT.
I will do the profiling soon and will update here.
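In case it helps others: switching between the three low-level trackers is done through ll-lib-file in the [tracker] group of the app config. A sketch of mine (library paths assume DeepStream 5.0 on Jetson; adjust for your version):

```ini
[tracker]
enable=1
tracker-width=640
tracker-height=384
# Pick one low-level tracker library:
#   libnvds_mot_iou.so - IOU (cheapest, most ID switches)
#   libnvds_mot_klt.so - KLT
#   libnvds_nvdcf.so   - NvDCF (uses visual features, most stable IDs)
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
# NvDCF additionally reads its own parameter file:
ll-config-file=tracker_config.yml
enable-batch-process=1
```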
It is possible.
If a trajectory is discontinuous, the next detected object will be assigned a new ID.
The discontinuity may occur due to motion blur or a false negative from the detection algorithm.
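NvDCF can bridge short detection gaps with its shadow-tracking parameters, which keep a lost track alive for a few frames instead of immediately spawning a new ID. A hedged fragment of tracker_config.yml (parameter names as in the DeepStream 5.0 sample file; the values here are illustrative, not tuned):

```yaml
NvDCF:
  # Keep a lost track alive (unreported) for up to N frames, so a brief
  # detector miss does not produce a new object ID.
  maxShadowTrackingAge: 30
  # Frames a new track must survive before it is reported.
  probationAge: 3
  earlyTerminationAge: 1
```

Raising maxShadowTrackingAge trades fewer ID switches for a higher risk of ID transfer between people who cross paths.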
I am able to use deepstream-imagedata-multistream to extract frames and run custom algorithms. I am okay with the slow performance until I write a CUDA implementation. However, I would like to update nvds_buf_surface with the modified imagery, so that the frame is used later in the pipeline for further processing and display.
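Since get_nvds_buf_surface() returns a numpy array mapped onto the NvBufSurface (RGBA), writing into that array in place, rather than into a copy, should change the buffer that downstream elements (OSD, encoder, sink) see. A sketch of what I mean (the blur-persons use case is just an example):

```python
import pyds
import cv2

def blur_persons_in_place(gst_buffer, frame_meta):
    # numpy view onto the mapped NvBufSurface, not a copy
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        obj = pyds.NvDsObjectMeta.cast(l_obj.data)
        r = obj.rect_params
        top, left = int(r.top), int(r.left)
        w, h = int(r.width), int(r.height)
        roi = n_frame[top:top + h, left:left + w]
        # write the result back into the mapped buffer
        roi[:] = cv2.GaussianBlur(roi, (31, 31), 0)
        l_obj = l_obj.next
```

Is in-place modification like this the supported way to push modified imagery downstream, or is there an explicit write-back/unmap step I am missing?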
Hi, I’m trying to achieve tracking purely with DeepStream as well. Can you share your insights? I’ve tried to access the metadata but it is always null: