Using Custom Tracker in Deepstream pipeline

• Hardware Platform (GPU) - NVIDIA GeForce RTX 3080
• DeepStream Version - 6.2
• TensorRT Version - 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version - 525.105.17


I am trying to use my custom tracker in a DeepStream pipeline, but I cannot find any reference for implementing the library, i.e. the shared-object (**.so**) file, that the DeepStream config file points to. I realize that one can implement a low-level tracker against the API defined in nvdstracker.h, but my questions are:

1. Where do I implement the logic of my tracker (e.g., computing the error from the previous frame and predicting the position in the current frame)?
2. How do I make DeepStream call my implementation, and how do I generate the shared-object file so that I can set its path as ll-lib-file under the [tracker] section of the DeepStream config?
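For reference, the [tracker] section I mean looks roughly like this (all paths and values below are illustrative, not from a working setup):

```
[tracker]
enable=1
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/path/to/libnvds_mytracker.so
ll-config-file=/path/to/my_tracker_config.yml
```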

Currently, I created an “nvdstracker.cpp” file in the same directory as nvdstracker.h, included nvdstracker.h, and added the API functions as described in this documentation.

I also uploaded the nvdstracker.cpp file here.
nvdstracker.cpp (7.9 KB)

I don’t know how to connect the idea of implementing my custom tracker logic with the API calls inside nvdstracker.cpp.
Any help or suggestions are sincerely appreciated.
Thank you in advance~

You can implement your tracker logic in the user-defined context.

 /// Instantiate the user-defined context
 pContext = new NvMOTContext(*pConfigIn, *pConfigResponse);
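To make that concrete, here is a minimal, self-contained sketch of what "tracker logic in the user-defined context" can look like. Everything below is illustrative: Rect, SimpleTrack, and the method names are simplified stand-ins, not the real NvMOT* types from nvdstracker.h, and the constructor arguments (NvMOTConfig, NvMOTConfigResponse) are omitted. In a real plugin, NvMOT_Init constructs the context and NvMOT_Process calls into it with each frame's detections and fills the tracked-object batch from its results.

```cpp
#include <cstdint>
#include <unordered_map>

// Simplified stand-ins for illustration only; a real low-level tracker
// works with the NvMOTObjToTrack / NvMOTTrackedObj structures from
// nvdstracker.h instead.
struct Rect { float x, y, w, h; };

struct SimpleTrack {
    uint64_t id;
    Rect bbox;             // last confirmed position
    float vx = 0, vy = 0;  // per-frame velocity estimate
};

// The user-defined context: this is where the tracker logic lives.
// (The real class would take NvMOTConfig/NvMOTConfigResponse in its
// constructor, as in the snippet above.)
class NvMOTContext {
public:
    // Start tracking a newly detected object.
    uint64_t addTrack(const Rect &det) {
        uint64_t id = nextId_++;
        tracks_[id] = SimpleTrack{id, det, 0.0f, 0.0f};
        return id;
    }

    // Predict the object's position in the current frame from the
    // previous frame's position and velocity (constant-velocity model).
    Rect predict(uint64_t id) const {
        const SimpleTrack &t = tracks_.at(id);
        return {t.bbox.x + t.vx, t.bbox.y + t.vy, t.bbox.w, t.bbox.h};
    }

    // Correct the track with the detection matched in the current frame;
    // the error relative to the previous position updates the velocity.
    void update(uint64_t id, const Rect &det) {
        SimpleTrack &t = tracks_.at(id);
        t.vx = det.x - t.bbox.x;
        t.vy = det.y - t.bbox.y;
        t.bbox = det;
    }

private:
    std::unordered_map<uint64_t, SimpleTrack> tracks_;
    uint64_t nextId_ = 1;
};
```

Per frame, NvMOT_Process would then call predict() for each live track, associate the predictions with the new detections (association is left out here; any IoU or distance matching works), and call update() for each match.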

Thank you for your reply~

So, that means I need to implement my tracker logic inside the NvMOTContext class and return the instance to nvdstracker.cpp so it is used?

Can you explain in a little more detail? Or is there any sample implementation I can refer to?

Yes, you can implement your tracker logic inside the NvMOTContext class. Sorry, there is no sample implementation.


Thank you ~

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.