• Hardware Platform (Jetson / GPU): AGX Orin
• DeepStream Version: 6.1
My main goal is to evaluate the speed of tracked objects: (bbox center position at frame n−1 − bbox center position at frame n) / (time between frames n−1 and n). To do so, I am using the NvDCF tracker with enable-past-frame=1. To access the info of the past frames I am following deepstream-test-2.py. To validate that I am getting the proper information from the past frames, I would like to display the center positions of every past-frame object in the current frame. Here are the issues I am running into.
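The speed formula above can be sketched in plain Python (no pyds; the function names are illustrative, not DeepStream API):

```python
# Minimal sketch of the speed calculation described above:
# distance between bbox centers of consecutive frames, divided by the
# time between those frames.

def bbox_center(left, top, width, height):
    """Return the (x, y) center of a bounding box."""
    return (left + width / 2.0, top + height / 2.0)

def speed_px_per_s(center_prev, center_curr, dt):
    """Pixel-space speed between two frames separated by dt seconds."""
    dx = center_curr[0] - center_prev[0]
    dy = center_curr[1] - center_prev[1]
    return (dx ** 2 + dy ** 2) ** 0.5 / dt

# Example: object moves 30 px right and 40 px down in 1/30 s (30 fps)
c0 = bbox_center(100, 100, 50, 50)
c1 = bbox_center(130, 140, 50, 50)
speed = speed_px_per_s(c0, c1, 1 / 30.0)
print(speed)  # 50 px over 1/30 s -> 1500 px/s
```

Note this gives speed in pixels per second; converting to real-world units would require a camera calibration, which is outside the scope here.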
In the NvDsBatchMeta object there is an NvDsFrameMetaList, which can be accessed with frame_meta_list, and an NvDsUserMetaList, which can be accessed with batch_user_meta_list. The NvDsPastFrameObjBatch is part of NvDsUserMetaList, not NvDsFrameMetaList. Therefore, how do we know which frame in NvDsFrameMetaList is associated with a specific NvDsPastFrameObjList? In other words, suppose I have this code from deepstream-test-2.py:
In the last line of the code, what should frame_meta be? I know this is probably not the right way to do it, but I can’t figure it out right now. I tried to iterate batch_user_meta_list inside each frame from frame_meta_list, but it does not work: the circles disappear all the time and seem to be displayed randomly.
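For what it's worth, each past-frame object entry carries its own frame number (frameNum is a real field of pyds.NvDsPastFrameObj), so one way to make the association is to group past-frame entries by that frame number and then look up each frame from frame_meta_list by its frame_num. Here is a plain-Python sketch of that grouping; the dicts stand in for the pyds structures, and the field names other than frameNum are illustrative:

```python
# Sketch: group past-frame tracker entries by their frame number so each
# entry can be matched to the right frame in the batch. Plain dicts mimic
# the pyds structures here (this is not runnable DeepStream code).

from collections import defaultdict

def group_past_objs_by_frame(past_frame_obj_lists):
    """past_frame_obj_lists: one list of past objects per tracked ID.
    Returns {frame_num: [(unique_id, center_x, center_y), ...]}."""
    by_frame = defaultdict(list)
    for obj_list in past_frame_obj_lists:
        uid = obj_list["uniqueId"]
        for obj in obj_list["list"]:
            cx = obj["tBbox_left"] + obj["tBbox_width"] / 2.0
            cy = obj["tBbox_top"] + obj["tBbox_height"] / 2.0
            by_frame[obj["frameNum"]].append((uid, cx, cy))
    return dict(by_frame)

# One tracked object (ID 7) with entries for past frames 3 and 4
past = [{"uniqueId": 7,
         "list": [{"frameNum": 3, "tBbox_left": 10, "tBbox_top": 10,
                   "tBbox_width": 20, "tBbox_height": 20},
                  {"frameNum": 4, "tBbox_left": 12, "tBbox_top": 14,
                   "tBbox_width": 20, "tBbox_height": 20}]}]
print(group_past_objs_by_frame(past))
# {3: [(7, 20.0, 20.0)], 4: [(7, 22.0, 24.0)]}
```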
I know it’s possible, since it was done in this video from NVIDIA, and I would like to do exactly that: NVIDIA DeepStream Technical Deep Dive: Multi-Object Tracker - YouTube
Is it possible to get a code example of this?
How, in the video, were they able to display more than 16 circles? The limit seems to be set to 16.
You can refer to /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-app/deepstream_app.c for how to process “past frame” data.
To display more than 16 circles, you can call nvds_acquire_display_meta_from_pool() to acquire one more display meta from the pool.
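The idea above can be sketched as follows: each NvDsDisplayMeta holds at most 16 elements of one type (MAX_ELEMENTS_IN_DISPLAY_META), so the circles must be split into chunks of 16, with one display meta acquired from the pool per chunk. A plain-Python sketch of the chunking logic (the pool/acquire call itself is DeepStream API and only mimicked here):

```python
# Each NvDsDisplayMeta can carry at most 16 circle params, so N circles
# need ceil(N / 16) display metas acquired from the pool.

MAX_CIRCLES_PER_META = 16  # MAX_ELEMENTS_IN_DISPLAY_META in DeepStream

def split_into_display_metas(circles):
    """Group circles into chunks of at most 16; each chunk corresponds to
    one nvds_acquire_display_meta_from_pool() call."""
    return [circles[i:i + MAX_CIRCLES_PER_META]
            for i in range(0, len(circles), MAX_CIRCLES_PER_META)]

# 40 circles -> three display metas holding 16, 16 and 8 circles
chunks = split_into_display_metas(list(range(40)))
print([len(c) for c in chunks])  # [16, 16, 8]
```

In the real probe, for each chunk you would acquire a display meta, fill its circle_params and num_circles, and attach it to the frame with nvds_add_display_meta_to_frame().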
Dear Kesong, thank you for your reply. I guess you are referring to the write_kitti_past_track_output function. However, I don’t understand how this answers my question 1. From my understanding, it would just generate a bunch of files with past-frame information. What I would like to do is display the past-frame bbox centers on the most recent frame, just like in the NVIDIA video linked in my question. I don’t think generating a bunch of files and then reading them back would be the most efficient way, since the information is already stored in the metadata. In deepstream-test-2.py they do process past frames, so that is not my question (it is really similar to write_kitti_past_track_output, but without the KITTI part). Furthermore, while I could do it in C, I would prefer Python if possible. As for my question 2, I will take a deeper look at your reply, but could you provide an example? Thanks again
If you need to apply past-frame metadata onto the video, you need to delay the video output, because the past-frame metadata is only added to the metadata after the object is tracked again.
What do you mean by delaying the video? If an object is tracked for, let’s say, 5 frames and I’m at frame 5, I should have the past-frame information (1, 2, 3, 4). Why would I need to delay? I know about late activation, if that is what you mean, but my main goal is not to display the first few frames of an object; it is to do exactly what is shown in the video. Out of curiosity, how would someone delay the video?
There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks
The “past frame” metadata will be sent to the app at frame 5, and it will include the bbox info for frames 1/2/3/4. Hopefully you can get the 1/2/3/4 video frames and apply the “past frame” metadata onto them.
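The delaying suggested above, holding frames in a buffer until their past-frame metadata has arrived, can be sketched like this (plain Python; the frames and metadata arrival are simulated, not DeepStream API):

```python
# Sketch: delay frame output by DELAY frames so that past-frame metadata
# (which only arrives once the tracker re-confirms an object) can be
# attached to the frames it belongs to before they are released.

from collections import deque

DELAY = 4  # how many frames to hold back

buffer = deque()
released = []

def on_frame(frame_num, past_meta_by_frame):
    """Buffer each incoming frame; release a frame only once it is DELAY
    frames old, attaching any past-frame meta that arrived for it."""
    buffer.append(frame_num)
    if len(buffer) > DELAY:
        old = buffer.popleft()
        released.append((old, past_meta_by_frame.get(old, [])))

# Simulate: at frame 5 the tracker reports past meta for frames 1-4
past_meta = {}
for f in range(1, 9):
    if f == 5:
        past_meta.update({1: ["obj7"], 2: ["obj7"],
                          3: ["obj7"], 4: ["obj7"]})
    on_frame(f, past_meta)

print(released)
# frames 1-4 leave the buffer only at frames 5-8, after their meta arrived
```

Without the delay, frames 1-4 would already have been rendered by the time the past-frame metadata for them shows up at frame 5; that is why the circles cannot be drawn on those frames otherwise.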