Can I put nvtracker into nvinfer segmentation pipeline?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2

Hello, I have currently configured the pipeline as src - pgie (segmentation) - nvsegvisual - window sink. Looking at the input and output types of the plugins, I think I can put nvtracker between pgie and nvsegvisual:

  1. Can nvtracker be inserted at that position (see the pipeline sketch below)?
  2. If so, will the workflow still run normally?
    Thank you.
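
To make the placement concrete, here is a minimal Python/GStreamer sketch of that pipeline with nvtracker at the position in question. The source file, streammux resolution, pgie config name, and tracker library path are placeholders for illustration, not a tested configuration.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Pipeline as described above, with nvtracker placed between the
# segmentation pgie and nvsegvisual. All file paths are placeholders.
pipeline = Gst.parse_launch(
    "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=seg_pgie_config.txt ! "
    "nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! "
    "nvsegvisual width=1280 height=720 ! nvvideoconvert ! nveglglessink"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```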

No, a segmentation model does not produce detected objects, so there is nothing for nvtracker to operate on. This plugin allows the DS pipeline to use a low-level tracker library to track the detected objects with persistent (possibly unique) IDs over time.
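
To illustrate why: nvtracker writes its IDs into NvDsObjectMeta.object_id, and object meta is only produced by detectors. A segmentation pgie attaches NvDsInferSegmentationMeta as user meta instead, so a standard metadata probe like the sketch below (common pyds pattern; the probe name is illustrative) would find no objects downstream of the tracker.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def tracker_src_pad_buffer_probe(pad, info, user_data):
    # Walk the batch meta attached by DeepStream and print tracker IDs.
    # With a segmentation pgie upstream, obj_meta_list is empty, so nothing
    # is printed: there are no objects for nvtracker to track.
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print("frame", frame_meta.frame_num, "track id", obj_meta.object_id)
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```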

Oh, when I tried inserting it myself, the segmentation meta was indeed not handled properly.
Then, does NVIDIA have any plans for this?

Since SAM-Track is very impressive, I think it would be nice if it were released as a separate feature in DeepStream.

Thanks for your suggestion; we will discuss it.

Hi @yeongjae1, could you provide us with the model for this requirement and describe its inputs and outputs? We can try to integrate it into DeepStream.

The input of the model is an (H, W, C) numpy array (the image frame), and the output is an (H, W) numpy array (the predicted mask).
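
To pin down the shapes, here is a dummy sketch of that interface (the function name and the integer ID-mask interpretation are assumptions, not the real SAM-Track API):

```python
import numpy as np

def sam_track_infer(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the SAM-Track inference step (hypothetical name).

    frame: (H, W, C) uint8 image frame.
    returns: (H, W) integer mask; each pixel holds the ID of the tracked
             object it belongs to (0 = background).
    """
    h, w, _ = frame.shape
    return np.zeros((h, w), dtype=np.int32)  # dummy output for shape illustration

frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # one video frame
pred_mask = sam_track_infer(frame)
assert pred_mask.shape == frame.shape[:2]
```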

@yuweiw

Please refer to the answer above. Is there anything else you need?

They use the DeAOT (Decoupling features in Associating Objects with Transformers, NeurIPS 2022) algorithm for efficient multi-object tracking and propagation.

OK. There are a few things I would like to know. Are there any trained models for this that can be used now? And since the output is an (H, W) predicted mask, how is the track ID of the predicted mask generated?

You can get the pretrained models from the GitHub - z-x-yang/Segment-and-Track-Anything page. It is an open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively; the primary algorithms utilized are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.

And the track ID of the predicted mask is generated using the Comparing Mask Results (CMR) mechanism, which compares the tracking results from DeAOT with the annotation results from SAM on every key-frame and selects the objects from the SAM annotations that are not yet being tracked by DeAOT, so that they can be assigned new IDs.
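
A much-simplified sketch of that CMR idea (not the actual SAM-Track implementation; the per-object overlap criterion and threshold are assumptions):

```python
import numpy as np

def comparing_mask_results(tracked_mask, sam_masks, next_id, overlap_thresh=0.5):
    """On a key-frame, give a new track ID to every SAM-annotated object
    that is not already covered by the DeAOT tracking results.

    tracked_mask: (H, W) int array of DeAOT track IDs (0 = background).
    sam_masks:    list of (H, W) boolean masks from SAM key-frame annotation.
    next_id:      first unused track ID.
    """
    for sam_mask in sam_masks:
        area = sam_mask.sum()
        if area == 0:
            continue
        # Fraction of this SAM object already covered by tracked pixels.
        covered = np.logical_and(sam_mask, tracked_mask > 0).sum() / area
        if covered < overlap_thresh:
            # Object is not being tracked yet: assign it a fresh track ID.
            tracked_mask[sam_mask] = next_id
            next_id += 1
    return tracked_mask, next_id
```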

Okay, we will discuss how to adapt that to DeepStream. If there is any progress, we will inform you in time.

Hi @yeongjae1, could you help confirm the following points?

  1. You can refer to the link below to describe your whole pipeline in detail: <src, preprocess, pgie, postprocess, tracker, …>.
    DS_ref_app_deepstream
  2. You need to convert the PyTorch model into an ONNX model (see the export sketch after this list).
  3. We can then try to integrate the model into DeepStream with a similar pipeline.
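
For point 2, here is a generic PyTorch-to-ONNX export sketch along those lines (the checkpoint name, input resolution, opset, and tensor names are placeholders, not the actual SAM-Track artifacts):

```python
import torch

# Load the (hypothetical) PyTorch segmentation/tracking network. In practice
# you would instantiate the SAM-Track module and load its state_dict here.
model = torch.load("sam_track_backbone.pth", map_location="cpu")
model.eval()

# Dummy NCHW input matching the (H, W, C) frame described above.
dummy_input = torch.randn(1, 3, 720, 1280)

torch.onnx.export(
    model,
    dummy_input,
    "sam_track_backbone.onnx",
    input_names=["image"],
    output_names=["pred_mask"],
    opset_version=16,
    dynamic_axes={"image": {0: "batch"}, "pred_mask": {0: "batch"}},
)
```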

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.