I am using the DeepStream Python SDK and want to track objects while they are in the frame. My input is an RTSP stream, and I am using the deepstream-test3 app as a reference. I am mainly concerned with tracking people, and I am using the default ResNet model for now. Here are the steps I have already tried for object tracking.
I combined deepstream-test2 and deepstream-test3 to use nvtracker, and I access the object ID from NvDsObjectMeta->object_id. This sometimes assigns different IDs to the same person. How can I make this more accurate and robust? I am using the default tracker configuration.
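One common cause of ID switches with the default setup is that the deepstream-test2 sample is configured with the lightweight KLT tracker. Switching the nvtracker element to the NvDCF low-level tracker usually gives noticeably more stable IDs, at some extra compute cost on the Nano. A hedged sketch of the `[tracker]` section, assuming the standard DeepStream 5.0 install path (verify the library path on your device):

```
[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
# NvDCF low-level tracker: more robust IDs than the default KLT/IOU tracker
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
# Sample NvDCF config shipped with DeepStream; parameters such as
# maxShadowTrackingAge control how long a lost track keeps its ID
ll-config-file=tracker_config.yml
enable-batch-process=1
```

Increasing the shadow-tracking age in `tracker_config.yml` helps keep the same ID across short occlusions, which is the typical failure mode with people.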
I tried the deepstream-imagedata-multistream app to extract frames using get_nvds_buf_surface. This gives me the flexibility to use my own tracking algorithms with OpenCV. However, this method is very slow and I am not able to get real-time results. Is that because this approach does not utilise the GPU on the Jetson Nano? I referred to https://docs.nvidia.com/metropolis/deepstream/python-api/Methods/methodsdoc.html#get-nvds-buf-surface
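If you stay with this path, the per-frame copy out of the GPU buffer plus a full OpenCV tracker update on the CPU is usually what kills the frame rate: the OpenCV part runs entirely on the Nano's CPU. One mitigation is to run the expensive step only every N frames and reuse the last result in between. A minimal plain-Python sketch of that pattern (the tracker callback here is a hypothetical stand-in for your OpenCV code):

```python
class FrameSkipper:
    """Run an expensive per-frame callback only every `interval` frames,
    reusing the last result in between (simple detect-then-coast scheme)."""

    def __init__(self, interval, heavy_fn):
        self.interval = interval      # run heavy_fn on every Nth frame only
        self.heavy_fn = heavy_fn      # hypothetical: your OpenCV tracker update
        self.count = 0
        self.last_result = None

    def process(self, frame):
        if self.count % self.interval == 0:
            self.last_result = self.heavy_fn(frame)
        self.count += 1
        return self.last_result


# Usage sketch with a dummy tracker that records which frames it saw.
calls = []

def fake_tracker(frame):              # stand-in for real OpenCV tracking
    calls.append(frame)
    return [(10, 10, 50, 50)]         # one dummy person bounding box

skipper = FrameSkipper(interval=3, heavy_fn=fake_tracker)
results = [skipper.process(f) for f in range(9)]
# the heavy step ran on frames 0, 3 and 6 only; every call still got boxes
```

This cuts CPU load roughly by the skip factor, at the cost of boxes lagging slightly between updates; you can also downscale the extracted frame before tracking to the same effect.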
Please guide me on which path I should take. I want a highly accurate tracker running on the Jetson Nano.
• Hardware Platform: Jetson Nano
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.0