DeepStream 5 vs 6: inference time and FPS calculation in the pipeline on Jetson Nano

  • Why do the first three frames take forever before anything is sent to the Python API, even when only the inference time is measured?

    → Can you specify which app you are using?
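
    Whichever app it is, the first few frames typically include one-time initialization cost, so it is common to exclude them when averaging per-frame latency. A minimal sketch in plain Python (the function name and the warm-up count of 3 are illustrative, not part of any DeepStream API):

    ```python
    def average_latency_ms(samples_ms, warmup=3):
        """Mean per-frame latency in ms, skipping the first `warmup` frames,
        whose timings include one-time engine/initialization cost."""
        steady = samples_ms[warmup:]
        if not steady:
            raise ValueError("need more samples than warm-up frames")
        return sum(steady) / len(steady)

    # First three frames are dominated by setup time and are discarded:
    average_latency_ms([500.0, 450.0, 400.0, 7.0, 7.0, 7.0])  # → 7.0
    ```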

  • Does DeepStream 5 measure only the inference time, or the cost of the whole flow? I expected the average time (7 ms over 100 frames) to be very fast, but I found it was very unstable.

    → When measuring performance, boost the clocks to make sure you get stable data:
    sudo nvpmodel -m 0    # model levels are listed in /etc/nvpmodel.conf
    sudo jetson_clocks
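
    For measuring end-to-end FPS in the pipeline (rather than inference time alone), a common approach is to count buffers from a GStreamer pad probe and average over a fixed window of frames. Below is a minimal sketch of the counting logic in plain Python; the class name is illustrative, the clock is injectable only so the logic can be demonstrated without real time passing, and the pad-probe wiring is assumed, not shown:

    ```python
    import time

    class FPSCounter:
        """Windowed FPS counter: call tick() once per frame, e.g. from a
        GStreamer buffer probe on the sink element's pad. Reports FPS
        every `window` frames, which smooths out per-frame jitter."""

        def __init__(self, window=100, clock=time.perf_counter):
            self.window = window   # frames per measurement window
            self.clock = clock     # injectable for offline testing
            self.count = 0
            self.start = None

        def tick(self):
            """Register one frame; return FPS when a window completes, else None."""
            now = self.clock()
            if self.start is None:
                self.start = now   # first frame opens the window
                return None
            self.count += 1
            if self.count >= self.window:
                fps = self.count / (now - self.start)
                self.count = 0
                self.start = now   # start the next window
                return fps
            return None
    ```

    Averaging over a window of 100 frames is the same idea as the 7 ms / 100 frames figure above: single-frame timings on Jetson are noisy, so only windowed averages (with clocks boosted) are comparable.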

  • Although it appears stable on DeepStream 6, sometimes measuring only the inference time gives very fast results (5–8 ms). Why does that happen?

    → Do you mean you get very different performance for YOLOv3 on DS5 and DS6?
    If yes, refer to this post about low YOLO performance on DS6; there is a fix, see comment 22:
    Deepstream 6 YOLO performance issue - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums