DeepStream NvTracker and batch processing capabilities


I would like to run YoloV3 on 36 sources of 384x216 RGBA images and track the detected objects via NvTracker. What performance can I expect, and what is the recommended number of sources (batch size) on a Jetson Nano? So far I have only tested a single source with rendering, which was slow; next I want to test 36 sources without rendering.

Since YoloV3 detects objects frame by frame with no notion of motion, what would you recommend for non-static objects? I am mainly interested in classifying objects that are in motion (for example, I don’t care if a car is parked, I only want metadata when the car is moving). Can I get metadata from simple motion detection (using OpenCV MOG2 or frame-to-frame differencing on the CPU), or can NvDCF in NvTracker give me a Kalman-filter object speed or an in-motion flag?
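For reference, the CPU frame-differencing idea mentioned above (frame2 − frame1) can be sketched with NumPy alone. This is only an illustrative sketch; the `diff_threshold` and `min_changed_fraction` values are made-up assumptions you would tune for your scenes, and a real pipeline would feed it grayscale frames decoded from the sources:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, diff_threshold=25):
    """Boolean mask of pixels that changed between two grayscale frames."""
    # int16 avoids uint8 wrap-around when subtracting.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > diff_threshold

def is_moving(prev_frame, curr_frame, min_changed_fraction=0.01):
    """Flag motion when more than min_changed_fraction of pixels changed."""
    return motion_mask(prev_frame, curr_frame).mean() > min_changed_fraction

# Tiny demo on synthetic 216x384 frames (matching the 384x216 resolution above).
prev = np.zeros((216, 384), dtype=np.uint8)
curr = prev.copy()
curr[50:100, 50:100] = 255  # a bright block appears -> "motion"
print(is_moving(prev, curr))  # True
```

OpenCV's MOG2 background subtractor would replace `motion_mask` with an adaptive background model, which handles gradual lighting changes better than raw differencing.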

Thank you very much!!

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1


Do you mean 36 videos but not 36 images?

Depends on the model you use and the processing you do in the pipeline.

NvDCF tracks the detected object bounding boxes. There is no extra metadata in the NvDCF output, and no object speed or in-motion flag either.

Hi Fiona, yes, I do mean 36 videos at the small resolution of 384x216. Or you can think of it as 36 images from different sources (cameras or video).

There is no such recommendation, because performance differs depending on the pipeline and the functions used in the application. There is some performance data for Jetson with our samples: Performance — DeepStream 5.1 Release documentation

DeepStream is an SDK for customers to implement inference applications. It provides a framework to integrate different hardware capabilities in an application. DeepStream does not directly provide models or prefer any model; you need to choose your own model according to your requirements. nvtracker is for tracking, not motion detection, even though some motion-detection algorithms are used inside it. The algorithm is proprietary, and there is no object speed or in-motion output from NvTracker.
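Even though the tracker itself outputs no speed, it does assign a persistent object ID and a bounding box per frame, so a downstream probe can estimate pixel speed by differencing box centers between frames. The helper below is a hypothetical sketch, not part of DeepStream: in a real app you would feed it the object ID and bbox read from each NvDsObjectMeta:

```python
import math

class SpeedEstimator:
    """Estimate per-object pixel speed from tracker bbox centers across frames.

    Illustrative downstream helper (not a DeepStream API): feed it
    (object_id, bbox) pairs, where bbox is (left, top, width, height)
    in pixels, once per frame.
    """

    def __init__(self, fps):
        self.fps = fps
        self.last_center = {}  # object_id -> (cx, cy)

    def update(self, object_id, bbox):
        """Return speed in pixels/second, or None on first sighting."""
        left, top, w, h = bbox
        center = (left + w / 2.0, top + h / 2.0)
        prev = self.last_center.get(object_id)
        self.last_center[object_id] = center
        if prev is None:
            return None
        # Distance moved in one frame, converted to pixels per second.
        dist = math.hypot(center[0] - prev[0], center[1] - prev[1])
        return dist * self.fps

est = SpeedEstimator(fps=30.0)
est.update(1, (100, 100, 40, 20))         # first sighting -> None
print(est.update(1, (103, 104, 40, 20)))  # moved (3, 4) px = 5 px/frame -> 150.0 px/s
```

Thresholding the returned speed (e.g. treating anything below a few pixels/second as "parked") would give the moving/not-moving metadata asked about above; converting pixels to real-world units would additionally require camera calibration.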

Did you find a solution for applying MOG2?
A better question: is it possible to modify the frame inside a custom plugin and pass it on to the next plugin’s buffer in the pipeline?

Nope, I am still trying to solve this problem, but I haven’t had much time on the weekends. Will let you know if I do.