Gst-nvtracker

• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: DeepStream SDK 6.0
• JetPack Version (valid for Jetson only): JetPack 4.6
• TensorRT Version: TensorRT 8.0.1

I am currently working with DeepStream on a Jetson Xavier NX using Gst-nvtracker.
The detector sometimes fails to detect the object in a frame.
However, the tracker works fine and assigns the correct object ID again after the empty frame.

For example:

Frame[0]: ID=0, left=20, top=10
Frame[1]:
Frame[2]: ID=0, left=22, top=30
Frame[3]: ID=0, left=23, top=40

How can I get the peak location of the correlation response in Frame[1], as shown in the documentation?

In “deepstream_app.c” I only get tracker output if a detection is present.

static void
write_kitti_output (AppCtx * appCtx, NvDsBatchMeta * batch_meta)
{
  ...
  /* Iterate over each frame in the batch, then over each detected object */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      guint64 id = obj->object_id;
      float left = obj->tracker_bbox_info.org_bbox_coords.left;
      float top = obj->tracker_bbox_info.org_bbox_coords.top;
      ...
    }
  }
}

I asked this question in another topic that was closed. I apologize.
To be more specific: it is not a software bug, it is a general question about DeepStream. The documentation shows a purple x marking the center of the detector bboxes, and a yellow cross + marking the peak location of the correlation response. How do I get the coordinates of the yellow cross + in each frame?

Can the link below help you?

NVIDIA DeepStream SDK API Reference: Tracker Metadata

Please check the past-frame data info in the doc. You can retrieve the tracking data for the missed detections.
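
For reference, below is a minimal sketch of how that past-frame data can be read from the batch meta. It assumes the DeepStream 6.0 definitions from nvds_tracker_meta.h and that enable-past-frame=1 is set for the tracker; the helper name read_past_frame_meta is just for illustration, and I believe the deepstream-app sources contain a similar loop for their KITTI past-track output.

/* Sketch only: iterate the tracker's past-frame data attached as batch user
 * meta. Assumes DeepStream 6.0 (nvds_tracker_meta.h, gstnvdsmeta.h) and
 * enable-past-frame=1 in the tracker configuration. */
static void
read_past_frame_meta (NvDsBatchMeta * batch_meta)   /* hypothetical helper */
{
  for (NvDsUserMetaList * l_user = batch_meta->batch_user_meta_list;
      l_user != NULL; l_user = l_user->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
    if (user_meta->base_meta.meta_type != NVDS_TRACKER_PAST_FRAME_META)
      continue;

    NvDsPastFrameObjBatch *obj_batch =
        (NvDsPastFrameObjBatch *) user_meta->user_meta_data;

    for (guint s = 0; s < obj_batch->numFilled; s++) {
      NvDsPastFrameObjStream *obj_stream = &obj_batch->list[s];
      for (guint l = 0; l < obj_stream->numFilled; l++) {
        NvDsPastFrameObjList *obj_list = &obj_stream->list[l];
        for (guint o = 0; o < obj_list->numObj; o++) {
          NvDsPastFrameObj *past_obj = &obj_list->list[o];
          /* Bbox the tracker kept for a frame without a detection */
          g_print ("stream %u frame %u ID=%lu left=%.1f top=%.1f\n",
              obj_stream->streamID, past_obj->frameNum,
              (gulong) obj_list->uniqueId,
              past_obj->tBbox.left, past_obj->tBbox.top);
        }
      }
    }
  }
}

You would call such a helper with the same batch_meta that write_kitti_output already receives.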

Thanks. It works well once the object has been newly detected: the object's position in the missed frames is then filled in retrospectively. Is there a way to get the tracker's prediction in advance rather than afterwards? Or do I need to implement an additional Kalman filter to get predictions?

Do you mean you want to get “past-frame” data early?

No, this is not true. The missed objects are still being tracked in the background, which is called Shadow Tracking. I would recommend reviewing this doc: Gst-nvtracker — DeepStream 6.3 Release documentation

Tracker prediction is internal data right now, so it is not exposed in the metadata. Also, the KF parameters that drive the frame-to-frame prediction may not work well for a use case like yours where you need the prediction itself. So it would be better for you to make predictions using your own predictor.
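
If it helps, here is a rough, hypothetical sketch of such an external predictor: a constant-velocity Kalman filter applied per coordinate (shown here on the left values from the example frames above). The function names, noise values, and the simplified diagonal process noise are my own assumptions and are not part of DeepStream or the tracker's internal filter.

#include <stdio.h>

/* Hypothetical standalone predictor: one constant-velocity Kalman filter
 * per coordinate (e.g. bbox left and top). Not DeepStream API. */
typedef struct {
  double x[2];     /* state: [position, velocity] */
  double P[2][2];  /* state covariance */
  double q;        /* process noise intensity (assumed value) */
  double r;        /* measurement noise variance (assumed value) */
} Kf1d;

static void kf_init (Kf1d * kf, double pos, double q, double r)
{
  kf->x[0] = pos;    kf->x[1] = 0.0;
  kf->P[0][0] = r;   kf->P[0][1] = 0.0;
  kf->P[1][0] = 0.0; kf->P[1][1] = 1.0;
  kf->q = q;  kf->r = r;
}

/* Propagate one frame ahead with F = [[1, dt], [0, 1]]; the return value
 * is the predicted position for a frame the detector missed. */
static double kf_predict (Kf1d * kf, double dt)
{
  double p00 = kf->P[0][0], p01 = kf->P[0][1];
  double p10 = kf->P[1][0], p11 = kf->P[1][1];
  kf->x[0] += dt * kf->x[1];
  /* P = F P F' + Q, with a simplified diagonal Q = diag(q*dt, q*dt) */
  kf->P[0][0] = p00 + dt * (p01 + p10) + dt * dt * p11 + kf->q * dt;
  kf->P[0][1] = p01 + dt * p11;
  kf->P[1][0] = p10 + dt * p11;
  kf->P[1][1] = p11 + kf->q * dt;
  return kf->x[0];
}

/* Correct with a detected position, e.g. tracker_bbox_info left or top. */
static void kf_update (Kf1d * kf, double z)
{
  double s  = kf->P[0][0] + kf->r;   /* innovation covariance */
  double k0 = kf->P[0][0] / s;       /* Kalman gain */
  double k1 = kf->P[1][0] / s;
  double y  = z - kf->x[0];          /* innovation */
  double p00 = kf->P[0][0], p01 = kf->P[0][1];
  kf->x[0] += k0 * y;
  kf->x[1] += k1 * y;
  kf->P[0][0] -= k0 * p00;
  kf->P[0][1] -= k0 * p01;
  kf->P[1][0] -= k1 * p00;
  kf->P[1][1] -= k1 * p01;
}

int main (void)
{
  Kf1d kf;
  kf_init (&kf, 20.0, 1.0, 4.0);          /* Frame[0]: left = 20 */
  printf ("Frame[1] predicted left = %.1f\n", kf_predict (&kf, 1.0)); /* miss */
  kf_predict (&kf, 1.0);  kf_update (&kf, 22.0);  /* Frame[2]: left = 22 */
  kf_predict (&kf, 1.0);  kf_update (&kf, 23.0);  /* Frame[3]: left = 23 */
  printf ("Frame[4] predicted left = %.1f\n", kf_predict (&kf, 1.0));
  return 0;
}

With only a few detections the estimate is crude, but the same predict/update cycle can be keyed by object_id so that every tracked object keeps its own filter and a prediction is available for the very next frame.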

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
