I would like to run YoloV3 on 36 sources of 384x216 RGBA images and track the detected objects with NvTracker. What is the expected performance, or the recommended number of sources per batch, on Jetson Nano? So far I have only tested a single source with rendering enabled, and it is slow; next I want to test 36 sources without rendering.
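For reference, this is roughly how I would expect the batching to be set up in a deepstream-app config file (a sketch only; the batch-size, timeout, and source values here are my assumptions for this use case, not settings I have validated on Nano):

```ini
# [streammux] controls batching across sources in deepstream-app.
# batch-size=36 assumes one frame per source per batch (untested assumption).
[streammux]
gpu-id=0
batch-size=36
batched-push-timeout=40000
width=384
height=216
live-source=1

# One of 36 source sections; num-sources can fan out a single URI for testing.
[source0]
enable=1
type=3
uri=file:///path/to/test.mp4
num-sources=36
```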
Since YoloV3 detects objects frame by frame with no notion of motion, what would you recommend for non-static objects? I am mainly interested in classifying objects that are in motion (for example, I don't care whether a car is parked; I only want metadata when the car is moving). Can I get metadata from simple motion detection (e.g., OpenCV MOG2 background subtraction or frame-to-frame differencing on the CPU), or can the NvDCF tracker in NvTracker expose its Kalman-filter state, such as object speed or an in-motion flag?
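To illustrate what I mean by CPU frame differencing, here is a minimal NumPy sketch (the `motion_mask` helper and the threshold of 25 are my own illustrative assumptions, not an existing API):

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Per-pixel motion mask via absolute frame differencing.

    prev, curr: uint8 grayscale frames of equal shape.
    Returns a boolean mask, True where a pixel changed by more than thresh.
    """
    # Widen to int16 so the subtraction cannot wrap around in uint8.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Synthetic 216x384 frames: a bright 20x30 block "appears" between frames.
prev = np.zeros((216, 384), dtype=np.uint8)
curr = np.zeros((216, 384), dtype=np.uint8)
curr[50:70, 100:130] = 200  # the moving object

mask = motion_mask(prev, curr)
print(mask.sum())  # 600 changed pixels (20 * 30)
```

The idea would be to use such a mask only to decide which detections are worth keeping as "in motion" metadata, rather than as a detector itself.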
Thank you very much!!
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1