DeepSort in DeepStream

**• Hardware Platform (Jetson / GPU):** Jetson
**• DeepStream Version:** 5.0
**• JetPack Version (valid for Jetson only):** 4.4
**• TensorRT Version:** 7.1.3.0

DeepSort is a very popular object tracking algorithm, and I want to use it in DeepStream.
It seems that I need to customize my nvdsinferinitializeinputlayers and bbox functions.
I have found a TensorRT repo for DeepSort on GitHub: kunyao2015/TensorRT_Yolo

What can I do in Deepstream?

I need some help.

You can deploy the model to DeepStream if it runs well with TensorRT.

@bcao Thanks for your reply.

I know about this.
But DeepSort seems to be a tracker algorithm, so I am trying to write a custom tracker plugin.

After reading some previous posts, I tried to test it. Now I can get the output bboxes correctly: I output the detected bboxes directly. But no bboxes appear in the video.

Here is my test code.

#include <math.h>
#include <stdio.h>
#include "nvdstracker.h"
#include <iostream>
#include <glib.h>
#include "nvbufsurface.h"
#include "nvbufsurftransform.h"
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"
#include <cstddef>
#include <vector>

#define TRACKER_AGE_INFRAME 120

/**
* The plugin uses this function to query the low-level library’s capabilities and requirements 
* before it starts any processing sessions (contexts) with the library.
**/
NvMOTStatus NvMOT_Query(uint16_t customConfigFilePathSize, char *pCustomConfigFilePath, NvMOTQuery *pQuery)
{
    std::cout << "Exec NvMOT_Query ..." << std::endl;
    pQuery->memType = NVBUF_MEM_DEFAULT;
    pQuery->colorFormats[0] = NVBUF_COLOR_FORMAT_RGBA;
    pQuery->colorFormats[1] = NVBUF_COLOR_FORMAT_RGBA;
    pQuery->colorFormats[2] = NVBUF_COLOR_FORMAT_RGBA;
    pQuery->colorFormats[3] = NVBUF_COLOR_FORMAT_RGBA;

    return NvMOTStatus_OK;
}

/**
* After the query, and before any frames arrive, the plugin must initialize a context with the low-level library
**/
NvMOTStatus NvMOT_Init(NvMOTConfig *pConfigIn, NvMOTContextHandle *pContextHandle, NvMOTConfigResponse *pConfigResponse)
{
    std::cout << "Exec NvMOT_Init ..." << std::endl;
    pConfigResponse->summaryStatus = NvMOTConfigStatus_OK;
    pConfigResponse->computeStatus = NvMOTConfigStatus_OK;
    pConfigResponse->transformBatchStatus = NvMOTConfigStatus_OK;
    pConfigResponse->miscConfigStatus = NvMOTConfigStatus_OK;
    pConfigResponse->customConfigStatus = NvMOTConfigStatus_OK;

    // Placeholder: reuse the config pointer as an opaque context handle.
    // A real tracker would allocate and return its own context here.
    NvMOTContextHandle *ctx = (NvMOTContextHandle *)pConfigIn;
    *pContextHandle = *ctx;

    return NvMOTStatus_OK;
}

NvMOTStatus NvMOT_Process(NvMOTContextHandle contextHandle, NvMOTProcessParams *pParams, NvMOTTrackedObjBatch *pTrackedObjectsBatch)
{
    std::cout << "Exec NvMOT_Process ..." << std::endl;
    NvMOTFrame *fr = pParams->frameList;
    NvMOTObjToTrackList det_obj = fr->objectsIn;
    NvMOTObjToTrack *obj_to_trk = det_obj.list;

    // std::cout << "pParams->numFrames : " << pParams->numFrames << std::endl;
    std::cout << "Current Frame Num : " << fr->frameNum << std::endl;
    // std::cout << "Current timeStamp : " << fr->timeStamp << std::endl;
    // std::cout << "Current StreamID : " << fr->streamID << " is allowed to track ? " << (fr->doTracking ? "True" : "False") << std::endl;
    std::cout << "Current Frame Detected Objs Num is : " << det_obj.numFilled << std::endl;

    // NOTE: these heap vectors are never freed; they back the pointers
    // handed to pTrackedObjectsBatch after this call returns.
    std::vector<NvMOTTrackedObj> *out_trk = new std::vector<NvMOTTrackedObj>;
    std::vector<NvMOTTrackedObjList> *out_trk_list = new std::vector<NvMOTTrackedObjList>;

    for (uint32_t i = 0; i < det_obj.numFilled; i++)
    {
        if((obj_to_trk+i)->confidence < 0.9) continue;
        std::cout << (obj_to_trk+i)->confidence << ":"
                  << static_cast<unsigned int>((obj_to_trk+i)->bbox.x) << ":"
                  << static_cast<unsigned int>((obj_to_trk+i)->bbox.y) << ":"
                  << static_cast<unsigned int>((obj_to_trk+i)->bbox.width) << ":"
                  << static_cast<unsigned int>((obj_to_trk+i)->bbox.height) <<  std::endl;

        // Build the output object on the stack; push_back copies it.
        NvMOTTrackedObj OutObject;
        OutObject.bbox.x = static_cast<unsigned int>((obj_to_trk+i)->bbox.x);
        OutObject.bbox.y = static_cast<unsigned int>((obj_to_trk+i)->bbox.y);
        OutObject.bbox.width = static_cast<unsigned int>((obj_to_trk+i)->bbox.width);
        OutObject.bbox.height = static_cast<unsigned int>((obj_to_trk+i)->bbox.height);
        OutObject.trackingId = i+1;
        OutObject.confidence = (obj_to_trk+i)->confidence;
        OutObject.classId = (obj_to_trk+i)->classId;
        OutObject.age = TRACKER_AGE_INFRAME;
        OutObject.associatedObjectIn = static_cast<NvMOTObjToTrack*>(obj_to_trk+i);
        out_trk->push_back(OutObject);
    }

    for (uint32_t i = 0; i < pTrackedObjectsBatch->numAllocated; i++)
    {
        std::cout << "Num filled is : " << out_trk->size() << std::endl;
        NvMOTTrackedObjList OutObject_list;
        OutObject_list.streamID = fr->streamID;
        OutObject_list.frameNum = fr->frameNum;
        OutObject_list.valid = true;
        OutObject_list.list = out_trk->data();
        OutObject_list.numFilled = out_trk->size();
        OutObject_list.numAllocated = out_trk->size();
        out_trk_list->push_back(OutObject_list);
    }
    // Replace the batch's list pointer with our own buffer.
    pTrackedObjectsBatch->list = out_trk_list->data();

    return NvMOTStatus_OK;
}

void NvMOT_DeInit(NvMOTContextHandle contextHandle)
{
    std::cout << "Exec NvMOT_DeInit ..." << std::endl;
}
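
For reference, once a plugin like this is compiled into a shared library, the `[tracker]` section of the DeepStream app config points at it via `ll-lib-file`. The path and dimensions below are placeholder assumptions, not values from this thread:

```
[tracker]
enable=1
tracker-width=640
tracker-height=384
# Hypothetical path to the compiled custom low-level tracker library
ll-lib-file=/path/to/libnvds_customtracker.so
gpu-id=0
```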

I have set the tracker's KITTI output in the config file, but all frames are empty.
When I use NvDCF, the outputs are correct.
So I suppose pTrackedObjectsBatch->list is being constructed incorrectly.

Any help?

After some research, I found the problem.
The key point is that we should use memcpy to fill pTrackedObjectsBatch->list instead of replacing the pointer.

Next I will try to test DeepSort.


Hello, I'm new to the TensorRT field. Could you help me solve some problems?
(1) How do I generate a deepsort.engine file?
(2) Could you share your work on inference with the TensorRT C++ API?

Hi yangjinyi_gis,

Please open a new topic for your issue. Thanks.

Hello @illusioncn, I am also trying to run YOLO and DeepSort. I have YOLO working just fine, but I have not found a reference for DeepSort plugins in DeepStream. Did you manage to get it working? If so, would you mind sharing how you accomplished it?