As explained above, the inference performance is determined by the combined "preprocess + inference + postprocess" speed. It seems that speed cannot reach 100 FPS.
@Fiona.Chen
But that seems extreme to me, because I can't even reach two frames per second.
Yes. You need to measure the batch-size-4 engine performance and the customized postprocessing performance yourself.
A batch-size-1 engine can only process one frame at a time. Since you have 4 sources, it is better to use a batch-size-4 engine.
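If you are using the deepstream-app style config files, the batch size is set in two places: the streammux group and the nvinfer model config. A minimal sketch (the key names follow the DeepStream config reference; the timeout value is illustrative, not tuned):

```
# deepstream-app config (sketch): batch the 4 sources together
[streammux]
batch-size=4
batched-push-timeout=40000

# nvinfer model config (sketch): must match an engine built for batch size 4
[property]
batch-size=4
```

Note that the TensorRT engine itself must be built or exported with a maximum batch size of at least 4, otherwise nvinfer will fall back to rebuilding it.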
You can also refer to the Nvidia DeepStream sample for how to optimize the postprocessing. deepstream_tools/yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/deepstream_tools
@Fiona.Chen
Okay, thank you for your reply. I will look for the best way to perform postprocessing to reduce latency. Finally, I would like to ask: where is the preprocessing part of DeepStream for the YOLO model?
It is inside gst-nvinfer. The gst-nvinfer plugin is open source; the source code is in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer and /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer. There is also a source code diagram for your reference. DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
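For reference, the YOLO-style preprocessing (resize, normalization, channel order) that gst-nvinfer performs is driven by keys in the nvinfer config file. A hedged sketch, check the values against how your model was exported:

```
[property]
# normalize pixels to [0,1]: scale factor 1/255
net-scale-factor=0.0039215697906911373
# 0 = RGB input ordering
model-color-format=0
# letterbox-style resize, as most YOLO exports expect
maintain-aspect-ratio=1
symmetric-padding=1
```

The actual scaling/conversion code that applies these settings lives in the nvdsinfer sources mentioned above.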
BTW, it is better to use the latest JetPack and DeepStream versions with your Orin Nano. They include many bug fixes and some optimizations.
Whenever I had the distortion issue, using rtspt:// (TCP) instead of rtsp:// in the URI fixed it.
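If the sources are configured through a deepstream-app style config, this is just a one-character change in the URI. A sketch (the address below is a placeholder, not from this thread):

```
[source0]
enable=1
type=4
# rtspt:// forces RTP over TCP, avoiding the UDP packet loss
# that typically causes the screen distortion
uri=rtspt://192.168.1.10:554/stream
num-sources=1
```

The trade-off, as noted below, is that TCP retransmission can add some latency compared with UDP.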
@Y-T-G
Thank you for providing the method. After testing, it does indeed solve the screen-distortion problem, but it may introduce some latency (whether that is acceptable depends on personal tolerance).
For me, changing the nvurisrcbin to use device memory instead of unified memory fixed the latency issue.
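Assuming the sources go through deepstream-app, the decoder output memory type can be selected with the cudadec-memtype key in the source group (nvurisrcbin exposes the same setting as a cudadec-memtype property; per the DeepStream docs, 0 = device, 1 = pinned, 2 = unified). A sketch with a placeholder URI:

```
[source0]
enable=1
type=4
uri=rtspt://192.168.1.10:554/stream
# 0 = device memory; switching away from unified memory (2)
# removed the extra latency in this case
cudadec-memtype=0
```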