Using DeepStream with YOLO models - performance on Jetson Nano?

I’m testing the YOLO models that ship with the DeepStream SDK 4.0.2.

The YOLOv3 model seems very accurate, but I can only process 1 stream on a Jetson Nano at around 2 fps. If I change the input resolution in yolo-v3.cfg to 416x416 I can get around 11 fps on a live RTSP camera source. As soon as I add more cameras, though, throughput slowly collapses and latency becomes huge.
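For reference, the resolution change is just the width/height in the [net] section of yolo-v3.cfg (I believe the default shipped with the SDK is 608x608 - values must be multiples of 32):

```ini
# yolo-v3.cfg, [net] section - lowering input resolution trades accuracy for speed
[net]
width=416
height=416
```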

So I’ve tested the YOLOv3-tiny model. Obviously I get better fps, and I can run it in deepstream-app with 4 RTSP sources, but the accuracy seems very low. It struggles to find people unless they stand still directly in front of the camera. I’ve played with various settings like interval = 4 or 8 and a tracker, and it makes no difference to the accuracy.

When using the standard DeepStream resnet10 model the accuracy is great, but I notice false detections. For example, it will constantly label a chair inside our house as a person. This is why I thought of using YOLO for better accuracy, but it seems this is not the case with the tiny YOLO models.

Is there any guidance here…?

I read in another thread that YOLO is not officially supported by DeepStream??

Hi,

YOLOv3 is a highly compute-intensive model for a Nano, so you will have to trade off accuracy for performance and tune the pipeline to fit your problem. You can try the yolov3-tiny model in FP16 mode and use a CPU-based tracker like KLT so that maximum GPU resources are available for inference.
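As a rough sketch (file names and library paths here assume a default DeepStream 4.0 install on the Nano - adjust to your setup), the relevant config entries would look something like:

```ini
# In the nvinfer config file for the YOLO model
# (e.g. config_infer_primary_yoloV3_tiny.txt):
# network-mode: 0=FP32, 1=INT8, 2=FP16
[property]
network-mode=2

# In the deepstream-app config, enable the CPU-based KLT tracker
# so the GPU is left free for inference:
[tracker]
enable=1
tracker-width=480
tracker-height=272
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
```

Note the engine has to be rebuilt after switching network-mode, so expect a longer startup on the first run.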

>>> I’ve played with various settings like interval = 4 or 8 and a tracker, and it makes no difference to the accuracy.

The accuracy is not dependent on the interval. If interval is set to 2, inference is performed on every third frame, and the tracker keeps track of the objects on the frames where inference is not performed.
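For example, in the deepstream-app config the interval is set on the primary GIE section (a sketch, assuming the sample app's config layout):

```ini
[primary-gie]
enable=1
# interval=2 skips 2 frames between inferences: infer on frame 0,
# let the tracker cover frames 1 and 2, infer again on frame 3, ...
interval=2
```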

Thanks chandrahasj. Since the accuracy of YOLOv3-tiny is far worse than the standard DeepStream resnet10, I will stick with resnet10 for now, and look at full YOLOv3 if I move to more powerful hardware. ;-)