Hello NVIDIA Team,
I am benchmarking a DeepStream application running on a Jetson platform (multi-camera pipeline) and would like to confirm the recommended NVIDIA tools / methods for measuring the following performance metrics:
- Pipeline FPS (Total) – total frames processed per second across all cameras
- Per-Camera FPS – individual FPS for each input source
- Frame Drop Rate – percentage of frames dropped during pipeline execution
- Inference Throughput – number of inferences completed per second
Currently, I am aware of options such as:
- DeepStream built-in performance measurement (enable-perf-measurement)
- Frame and source tracking using NvDsFrameMeta
- Counting GIE outputs for inference throughput
- System monitoring using jtop and tegrastats
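For concreteness, the NvDsFrameMeta-based per-source counting I have in mind looks roughly like this (pure-Python sketch; the pad-probe wiring that would feed it `source_id` values from each batch's NvDsFrameMeta is omitted, and the class name is my own):

```python
import time
from collections import defaultdict


class PerSourceFpsCounter:
    """Counts frames per source_id and reports average FPS per source.

    In the real pipeline, on_frame() would be called from a GStreamer pad
    probe that walks NvDsBatchMeta -> NvDsFrameMeta and passes
    frame_meta.source_id for every frame in the batch.
    """

    def __init__(self):
        self.counts = defaultdict(int)
        self.start = time.monotonic()

    def on_frame(self, source_id):
        self.counts[source_id] += 1

    def fps(self):
        elapsed = time.monotonic() - self.start
        if elapsed <= 0:
            return {}
        return {sid: n / elapsed for sid, n in self.counts.items()}
```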
I would like to know:
- Which tools or APIs are officially recommended by NVIDIA for these metrics?
- Are there any best practices or preferred approaches for accurate benchmarking on Jetson?
- Is DeepStream’s built-in performance measurement sufficient for all of these metrics, or should additional tools be used?
Any guidance or references to official documentation would be greatly appreciated.
Thank you.
*** Please note that this reply is generated by LLM automatically ***
To accurately benchmark a DeepStream application running a multi-camera pipeline on a Jetson platform, you are on the right track with the options you are considering. Here is an analysis of the metrics you mentioned and the approaches you can take:
- Total frames processed per second across all cameras: This can be measured with DeepStream’s built-in performance measurement (enable-perf-measurement in the deepstream-app configuration), which periodically prints pipeline FPS figures.
- Individual FPS for each input source: The same built-in measurement reports FPS per source. Alternatively, you can use the GStreamer fpsdisplaysink element, or a pad probe that counts frames per NvDsFrameMeta source_id.
- Percentage of frames dropped during pipeline execution: DeepStream’s built-in measurement does not report this directly. You can estimate it by comparing the expected frame count from the sources with the frames that actually reach the end of the pipeline, or by watching drop counters such as the one in fpsdisplaysink.
- Number of inferences completed per second: Attach a pad probe to the src pad of the nvinfer (GIE) element and count the buffers (or output objects) it emits per second.
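As a sketch of the drop-rate arithmetic above (a hypothetical helper, not a DeepStream API):

```python
def drop_rate_percent(frames_expected, frames_processed):
    """Frame drop rate as a percentage of expected frames.

    frames_expected: frames the sources delivered (or should have delivered)
    frames_processed: frames that actually reached the end of the pipeline
    """
    if frames_expected <= 0:
        return 0.0
    dropped = frames_expected - frames_processed
    return 100.0 * dropped / frames_expected
```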
In addition to DeepStream’s built-in performance measurement tools, you can also use other approaches to measure these metrics:
- System monitoring using tegrastats and jtop: On Jetson, tegrastats (and the jtop wrapper around it) reports GPU utilization, memory usage, and CPU usage; note that nvidia-smi is not generally available for Jetson’s integrated GPU. Monitoring these metrics gives you an overall picture of system load and helps identify potential bottlenecks.
- Frame and source tracking by counting GIE outputs: This approach attaches probes around the GIE (GPU Inference Engine, i.e. nvinfer) elements to count the frames processed and the inferences completed. It provides more detailed information about the pipeline’s behavior, but it requires more development effort.
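A minimal sketch of such a counter (pure Python; in a real pipeline, on_inference() would be invoked from a pad probe on the nvinfer src pad — the probe wiring itself is omitted, and the class name is illustrative):

```python
import time


class InferenceThroughputCounter:
    """Counts inference outputs and reports an average per-second rate.

    What counts as "one inference" is up to you: one batched buffer from
    nvinfer, or one detected object (NvDsObjectMeta), passed via n.
    """

    def __init__(self):
        self.count = 0
        self.start = time.monotonic()

    def on_inference(self, n=1):
        self.count += n

    def per_second(self):
        elapsed = time.monotonic() - self.start
        return self.count / elapsed if elapsed > 0 else 0.0
```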
While DeepStream’s built-in performance measurement tools are sufficient for measuring some of the metrics, additional tools may be necessary to get a more comprehensive understanding of the pipeline’s performance.
Here are some official documentation references that may be helpful:
- DeepStream Performance Measurement: this section of the DeepStream documentation describes the built-in performance measurement tools and how to use them.
- DeepStream Plugins: this section of the DeepStream documentation describes the available plugins, including Gst-nvinfer; the fpsdisplaysink element is documented in the upstream GStreamer documentation.
- NVIDIA Jetson Platform Documentation: this documentation provides an overview of the Jetson platform and its tools, including tegrastats.
In summary, by combining DeepStream’s built-in performance measurement with the additional approaches above, you can accurately benchmark your multi-camera DeepStream application on a Jetson platform.
*** Please give the thumbs up if you get answers you like. Or provide feedback to help us improve the answer. ***
I am benchmarking a custom DeepStream multi-camera pipeline on a Jetson platform.
Below are the tools and methods I am currently using for benchmarking my pipeline code, and I would like to confirm whether these are appropriate and recommended for accurate performance evaluation.
Tools used for benchmarking:
Pipeline FPS (Total):
Per-Camera FPS:
- DeepStream per-source performance measurement
- NvDsFrameMeta (source_id) based frame counting
- GStreamer pad probe per source
Frame Drop Rate:
- Frame sequence number (frame_num) gap detection
- PTS (presentation timestamp) gap analysis
- fpsdisplaysink drop counter
- GStreamer queue overflow monitoring
- Expected vs. actual frame count comparison
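For the frame_num gap detection above, the logic I am using is roughly this (simplified, per-source sketch; frame_num values come from NvDsFrameMeta):

```python
def detect_gaps(frame_nums):
    """Given the ordered frame_num values seen for one source, return
    (dropped_count, gaps), where each gap is (last_seen, next_seen)."""
    gaps = []
    dropped = 0
    prev = None
    for n in frame_nums:
        if prev is not None and n > prev + 1:
            # Frames strictly between prev and n never arrived.
            gaps.append((prev, n))
            dropped += n - prev - 1
        prev = n
    return dropped, gaps
```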
Inference Throughput:
- GIE (nvinfer) output frame counting
- NvDsObjectMeta / tensor output meta counting
- DeepStream component latency measurement
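For the component latency measurement, if I understand the DeepStream FAQ correctly, it is enabled via environment variables set before launching the application:

```shell
# Enable DeepStream buffer and per-component latency logging
# (set in the shell that launches the pipeline)
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
```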
System-Level Monitoring:
Could you please confirm if this set of tools and methods is suitable for benchmarking DeepStream pipelines on Jetson, or if NVIDIA recommends any changes or additions?
Thank you.
The right method to benchmark performance depends on your purpose.
If you are concerned with overall DeepStream pipeline performance: since the pipeline works asynchronously, the total FPS when the pipeline runs at full speed is what we can measure. In this mode there is no internal frame dropping inside the DeepStream pipeline (frame drops caused by network packet loss at the source have nothing to do with the DeepStream pipeline and cannot be measured by DeepStream). We have deepstream-app sample configurations that demonstrate how to measure the total performance of the pipeline: Performance — DeepStream documentation
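For reference, the relevant switches live in the [application] group of the deepstream-app configuration file (the interval value below is just an example):

```ini
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
```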
There are also some performance-related tips in Frequently Asked Questions — DeepStream documentation
If you want to measure the performance of every component inside the DeepStream pipeline, that can be useful for identifying the bottleneck: the total speed of the pipeline is limited by its slowest component. How to measure depends on each component’s function. E.g., if you want to know the speed of the hardware video decoder alone, you can refer to Video Codec SDK | NVIDIA Developer
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.