Can I get the detections of each frame from the custom tracking plugin?

I have created a custom tracking plugin that implements SORT tracking. MOT (multi-object tracking) is different from SOT (single-object tracking), so I need the detections of each frame to associate the results. However, I see the plugin only receives detections on the first frame and receives NULL detections on the next two frames. Is there any way I can get the detections of those next two frames?

Do you mean you need at least 3 successive frames' detection results for your algorithm? Even though there is only one input each time, every frame goes into the plugin one by one. Your plugin needs to count and store the data (the detection results, e.g. bounding boxes) by itself. When the next frame goes into your plugin, the data of the last frame has already been recorded inside your plugin, right? Your processing should happen on the third frame, not on the first two frames.

If you use your customized plugin after the DeepStream nvstreammux and nvinfer, the frame buffer contains batched frames, which may contain several frames from different streams (when there are multiple input streams). The detection results are stored in the batch meta (you can see this throughout our sample code). https://docs.nvidia.com/metropolis/deepstream/dev-guide/#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_metadata.html#

The dsexample plugin is a sample of how to get and use the meta inside a plugin. The source code is available at /opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-dsexample on your NVIDIA device.

In this function:

NvMOTStatus NvMOT_Process(NvMOTContextHandle contextHandle, NvMOTProcessParams *pParams, NvMOTTrackedObjBatch *pTrackedObjectsBatch)

I parse pParams to get the number of detections in each frame via det_obj = pParams->frameList->objectsIn, and I print numFilled:
** INFO: <bus_callback:166>: Pipeline running

0 frame det_obj number is det_obj.numFilled 8
1 frame det_obj number is det_obj.numFilled 0
2 frame det_obj number is det_obj.numFilled 0
3 frame det_obj number is det_obj.numFilled 8
4 frame det_obj number is det_obj.numFilled 0
5 frame det_obj number is det_obj.numFilled 0
9 frame det_obj number is det_obj.numFilled 8
10 frame det_obj number is det_obj.numFilled 0
11 frame det_obj number is det_obj.numFilled 0
12 frame det_obj number is det_obj.numFilled 7
13 frame det_obj number is det_obj.numFilled 0
14 frame det_obj number is det_obj.numFilled 0
15 frame det_obj number is det_obj.numFilled 5
16 frame det_obj number is det_obj.numFilled 0
17 frame det_obj number is det_obj.numFilled 0
18 frame det_obj number is det_obj.numFilled 6
19 frame det_obj number is det_obj.numFilled 0
20 frame det_obj number is det_obj.numFilled 0

I only get detections in frames 0, 3, 9, 12, 15, 18, …; the number of detections in all the other frames is 0. So I can only associate results between frames 0, 3, 9, 12, 15, 18, …; since there are no detections in the other frames, I can't do the same association there.

What is it? Is it provided by DeepStream or TensorRT? Where does pTrackedObjectsBatch come from? A detection network model? Have you integrated the detection model into DeepStream (with nvinfer)?

It is provided by DeepStream, of course, in gst-nvtracker.
Custom Low-Level Library:
https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.html#wwpID0E0X30HA

I know. You want to develop a customized tracking library with nvtracker.

You see that there are only objects detected on frames 0, 3, 6, 9, … but no objects detected on the other frames.
What is your problem? You want to get object output for every frame?

This NvMOT_Process callback will be called for every batched frame buffer.

Yes, I want to get object output for every frame. The NvMOT_Process callback is indeed called for every batched frame buffer, but the debug log shows that some frames' detection results are not passed into the function, and I think this is by design in the custom tracking library.
There is a note in the gst-nvtracker SDK:
Note: The output object descriptor NvMOTTrackedObj contains a pointer to the associated input object, associatedObjectIn. You must set this to the associated input object only for the frame where the input object is passed in. For example:

• Frame 0: NvMOTObjToTrack X is passed in. The tracker assigns it ID 1, and the output object associatedObjectIn points to X.

• Frame 1: Inference is skipped, so there is no input object. The tracker finds object 1, and the output object associatedObjectIn points to NULL.

• Frame 2: NvMOTObjToTrack Y is passed in. The tracker identifies it as object 1. The output object 1 has associatedObjectIn pointing to Y.

But the objects are detected by the detection model. What model are you using? Do you want to resolve this problem with the tracker or with nvinfer?

My fault!
I set [primary-gie] interval=2, so the detector only runs on one frame and then skips the next two frames.
It works well now, thanks.
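For reference, the interval property in the deepstream-app [primary-gie] config group controls how many frames inference skips after each inferred frame; the values below are a sketch of the relevant fragment, not a complete config.

```ini
[primary-gie]
enable=1
# interval=2 reproduces the pattern above: inference runs on one frame,
# then skips the next two (detections only every third frame).
# interval=0 runs detection on every frame, at higher compute cost.
interval=0
```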