I referred to the following article on the Jetson Nano.
Indeed, 20 FPS is displayed on the console when deepstream-app (yolov3) is executed.
I made the changes specified in the comment above to these config files, and we can achieve a throughput of 20 FPS.
However, I was wondering why the video playback of this demo was not smooth.
Following the osd_sink_pad_buffer_probe example, I enumerated the detected objects on every frame.
I then noticed that, with interval=5 set, inference is executed only once every 5 frames.
Without the interval, all frames were inferred, but throughput was only around 3 FPS.
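For reference, this is where the interval is set; the file name and keys below are taken from the stock DeepStream yolov3 sample and may differ in your setup:

```ini
# config_infer_primary_YoloV3.txt -- [property] group
# (file and group names assumed from the standard DeepStream yolov3 sample)
[property]
# Number of consecutive frames/batches to skip between inference calls;
# 0 means run inference on every frame
interval=5
```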
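A back-of-the-envelope check (assuming interval=5 means the network runs on 1 of every interval + 1 frames) shows the two observations are consistent:

```python
# With interval=5 the detector runs on 1 out of every (interval + 1) frames,
# so a 20 FPS pipeline implies roughly 20 / 6 ≈ 3.3 FPS of actual yolov3
# inference -- consistent with the ~3 FPS observed when the interval is removed.
interval = 5
pipeline_fps = 20
inference_fps = pipeline_fps / (interval + 1)
print(f"effective inference rate: {inference_fps:.1f} FPS")
```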
The article says “we can achieve a throughput of 20 FPS.”
That is true, but it does not mean that “20 FPS yolov3 inference” is possible, does it?
Is this understanding accurate?
The following comment in the article says “It’s a trade-off that needs to be tuned for your use-case.”
Is that what it means?
After all, is yolov3’s inference simply too heavy for the Jetson Nano?
Is there any other way to speed up yolov3 inference on the Jetson Nano?
(Other than switching to the tiny model.)
Please make sure to change the height and width to 416 in yolov3.cfg before generating the engine file.
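For example, in the [net] section of yolov3.cfg (stock darknet layout assumed; the default input size is often 608x608):

```ini
# yolov3.cfg -- [net] section; lower the network input resolution
# before the TensorRT engine is generated
[net]
width=416
height=416
```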
If the tracking results are bad for your test video, you can reduce the interval to improve the accuracy
further, but the FPS will drop as well. It’s a trade-off that needs to be tuned for your use-case.