NVDCF Tracking Parameters Detailed Info

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Nano
• DeepStream Version
5.0.1
• JetPack Version (valid for Jetson only)
4.4

Question and Request for More Information

I am using the NvDCF low-level tracker library to track people in a queue using some pre-recorded video segments. I am using the PeopleNet model, and it works relatively well at detecting people as they move through the queue area. My question is threefold:

  1. I have noticed that people's tracker IDs change fairly frequently as they get partially obscured by other people in the queue. Is there a way to set the tracker config parameters such that once a person is detected and assigned a tracker ID, then obscured for a short period of time and detected again, they are assigned the same tracker ID? I have read the current documentation and tried adjusting parameters such as maxShadowTrackingAge, earlyTerminationAge, featureImgSizeLevel, etc., but these make no difference.
  2. Is there any more detailed documentation on the low-level tracker parameters: their meanings, usage, and valid setting values?
  3. Is there a way to see the shadow-history logs of tracked objects from this library, so I can get visibility into what the library is seeing and recording against tracked objects? This would help enormously in debugging and tuning the tracker config parameters.

Many Thanks for any assistance

We will publish a detailed doc soon.

Past-frame data (shadow tracking) is stored in batch_meta->batch_user_meta_list and is dumped by write_kitti_past_track_output() in deepstream-app/deepstream_app.
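As a sketch of how to get those dumps without writing code: the deepstream-app reference application can write past-frame (shadow-tracked) data to KITTI-format files when the relevant keys are set in its config. The fragment below assumes DeepStream 5.0 key names and library paths; verify them against the sample configs shipped with your installation.

```ini
# deepstream-app config fragment (assumed DS 5.0 key names)
[application]
# Directory where write_kitti_past_track_output() dumps past-frame
# (shadow-tracked) object data in KITTI format, one file per frame
kitti-track-output-dir=/tmp/kitti_track

[tracker]
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
ll-config-file=tracker_config.yml
# Ask the low-level tracker to report past-frame (shadow) data
# via batch_meta->batch_user_meta_list
enable-past-frame=1
enable-batch-process=1
```

With this in place, the shadow history attached to each re-associated object ID should appear in the dump files, which helps when tuning the association parameters.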

Refer to the following section:

One may make the data association policy stricter by increasing the minimum qualifications such as:

  • minMatchingScore4SizeSimilarity
  • minMatchingScore4Iou
  • minMatchingScore4VisualSimilarity

One may consider enabling instance-awareness so that the correlation filters are learned more discriminatively against nearby objects.

If more resources can be used, it would definitely help to use a larger feature size (i.e., featureImgSizeLevel) and/or more visual features (i.e., useColorNames and useHog).
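Putting the suggestions above together, an NvDCF tracker_config.yml fragment might look like the following. The values are illustrative assumptions only, not recommendations, and the exact key names (in particular the instance-awareness one) should be checked against the tracker_config.yml shipped with your DeepStream version.

```yaml
# NvDCF tracker_config.yml fragment (illustrative values)

# Keep lost (occluded) targets alive longer so they can be
# re-associated with the same ID when re-detected
maxShadowTrackingAge: 51
earlyTerminationAge: 1

# Stricter data association: raise the minimum qualifications
minMatchingScore4SizeSimilarity: 0.6
minMatchingScore4Iou: 0.0
minMatchingScore4VisualSimilarity: 0.7

# Learn correlation filters more discriminatively against nearby objects
useInstanceAwareness: 1

# Spend more compute on richer visual features
featureImgSizeLevel: 3
useColorNames: 1
useHog: 1
```

Raising maxShadowTrackingAge trades memory and compute for a longer re-association window, which is the most direct lever for the ID-switch-under-occlusion problem described in question 1.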

@bcao Is there a detailed doc available yet? :)


I could only find this:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvtracker.html#gst-nvtracker