Inference with DeepStream yolov5s-3.0 on 2 cameras: long delay (20-25 s)

Description

Hi, I’m currently running inference with DeepStream and yolov5s-3.0 on two IP cameras on a Jetson Nano 4GB, and it runs with a long delay.
Here you can see the log.

I have read that the Jetson Nano can handle multiple IP cameras at full FPS, but I see a delay of at least 20 seconds.
My GPU is at 99% usage.
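For context, a common cause of multi-second lag with live RTSP sources in deepstream-app is the sink synchronizing rendering to stream timestamps and the jitter buffer adding latency. The snippet below is an illustrative sketch of the relevant keys, not taken from the attached config files:

```ini
# deepstream_app_config.txt — relevant sections only; values are illustrative
[streammux]
live-source=1        ; treat RTSP inputs as live so frames are not buffered for playback timing

[source0]
type=4               ; RTSP source
latency=100          ; jitter-buffer latency in ms; large values add end-to-end delay

[sink0]
sync=0               ; render as fast as possible instead of syncing to timestamps
```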

I tested with another model, YOLOv3-tiny, and had the same problem.

Environment

DeepStream: 5.1
JetPack: 4.5.1

Relevant Files

Here are my config files and the model to run with DeepStream:

config_infer_primary.txt (444 Bytes) deepstream_app_config.txt (1.1 KB)
yolov5s.engine (19.9 MB)

Steps To Reproduce

I followed this tutorial to run YOLOv5 with DeepStream:

https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5.md
I run the command `deepstream-app -c deepstream_app_config.txt`

Do you have any ideas? Thanks.

Hi,
Can you try running your model with the trtexec command, and share the --verbose log if the issue persists?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
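If it helps, a typical verbose run looks like the following sketch (the ONNX filename is a placeholder; substitute your exported model, and note that on JetPack the binary usually lives under /usr/src/tensorrt/bin):

```shell
# Build and profile the network from an ONNX export, printing verbose logs
/usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --verbose
```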

You can refer to the link below for the full list of supported operators; if any operator is not supported, you will need to create a custom plugin for that operation.

Also, please share your model and script if you have not already, so that we can help you better.

Thanks!

Thanks for your reply.
I added the model in the post above.

I ran `./trtexec yolov5s.engine` but it doesn’t support the model format.
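For anyone hitting the same error: trtexec loads a prebuilt, serialized engine through an explicit flag rather than a bare positional argument. A sketch, reusing the engine filename from the post above:

```shell
# Run inference benchmarks on an already-serialized TensorRT engine
./trtexec --loadEngine=yolov5s.engine --verbose
```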

Hi @constantin.fite,

You may get better help here. Please post your query in DeepStream forum.

Thank you.
