Hi,
May I ask how the MLPerf scores given below can actually be reproduced on the development board? Please provide installation and test instructions.
If the development board can't reproduce the above scores, what are the test conditions for MLPerf? Or does the product need to be sent to a third-party laboratory for testing?
Hi
The scores are measured on the production board, but the devkit should produce close results.
Below are the reference app and guidance:
Thanks.
Can you provide detailed MLPerf installation and test steps for the development board?
Hi,
The detailed steps are in the repository.
Please give it a check:
# MLPerf Inference v3.1 NVIDIA-Optimized Implementations
This is a repository of NVIDIA-optimized implementations for the [MLPerf](https://mlcommons.org/en/) Inference Benchmark.
This README is a quickstart tutorial on how to use our code as a public / external user.
**NOTE**: This document is autogenerated from internal documentation. If something is wrong or confusing, please contact NVIDIA.
---
### MLPerf Inference Policies and Terminology
This is a new-user guide to learn how to use NVIDIA's MLPerf Inference submission repo. **To get started with MLPerf Inference, first familiarize yourself with the [MLPerf Inference Policies, Rules, and Terminology](https://github.com/mlcommons/inference_policies/blob/master/inference_rules.adoc)**. This is a document from the MLCommons committee that runs the MLPerf benchmarks, and the rest of all MLPerf Inference guides will assume that you have read and familiarized yourself with its contents. The most important sections of the document to know are:
- [Key terms and definitions](https://github.com/mlcommons/inference_policies/blob/master/inference_rules.adoc#11-definitions-read-this-section-carefully)
- [Scenarios](https://github.com/mlcommons/inference_policies/blob/master/inference_rules.adoc#3-scenarios)
- [Benchmarks and constraints for the Closed Division](https://github.com/mlcommons/inference_policies/blob/master/inference_rules.adoc#411-constraints-for-the-closed-division)
- [LoadGen Operation](https://github.com/mlcommons/inference_policies/blob/master/inference_rules.adoc#51-loadgen-operation)
### NVIDIA's Submission
NVIDIA submits with multiple systems, each of which is in the datacenter category, the edge category, or both. In general, multi-GPU systems are submitted in datacenter, and single-GPU and single-MIG systems are submitted in edge.
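As a rough sketch, the public workflow for NVIDIA's MLPerf Inference code looks like the following. The repository name and `make` targets follow NVIDIA's published submission repos, but treat the exact targets and `RUN_ARGS` values as assumptions to confirm against the README above; by default the script only prints each step so it can be reviewed before running on the device.

```shell
# Hedged sketch of the public workflow for NVIDIA's MLPerf Inference code.
# The repository name and make targets follow NVIDIA's published submission
# repos; confirm the exact targets and RUN_ARGS against the README above.
# DRY_RUN=1 (default) only prints each step; set DRY_RUN=0 on the device.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"   # show the step without executing it
  else
    "$@"
  fi
}

run git clone https://github.com/mlcommons/inference_results_v3.1.git
run cd inference_results_v3.1/closed/NVIDIA
run make prebuild      # enter NVIDIA's docker build environment
run make build         # build the harness inside the container
run make run_harness RUN_ARGS="--benchmarks=resnet50 --scenarios=SingleStream"
```

The dry-run guard is just a convenience so the command sequence can be inspected first; on the Jetson itself you would run the same steps directly.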
Thanks.
Thank you for sharing!
I have run the Jetson benchmark and got the following score on my Jetson Nano devkit, but I am not sure whether it is reasonable.
I could not find a reference score for Jetson Nano on the official website; could you please provide one?
Hi,
Unfortunately, the official data has been updated.
The only remaining benchmark score for Nano is the one tested with a TAO model:
Thanks.
May I ask: for the same hardware and the same model, such as ResNet-50, are the scores obtained by MLPerf and the Jetson benchmark the same?
Hi,
You can find the info under the benchmark table.
For example:
Jetson AGX Xavier and Jetson Xavier NX MLPerf v1.1 Results
These results were achieved on the Jetson AGX Xavier Developer Kit and Jetson Xavier NX Developer Kit running JetPack 4.6, TensorRT 8.0.1, and CUDA 10.2.
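To compare your own runs against published numbers, the software stack should match the conditions quoted above. The probes below use the usual JetPack file and package locations, but verify them on your image (the TensorRT package name, for example, varies by release); on a non-Jetson host each probe simply reports "not found".

```shell
# Hedged sketch: probe the software versions on a Jetson devkit so they can
# be compared against published benchmark conditions (L4T/JetPack, TensorRT,
# CUDA). File and package names are the usual JetPack locations; verify on
# your image. On a non-Jetson host each probe reports "not found".
show() {
  desc=$1; shift
  if out=$("$@" 2>/dev/null); then
    echo "$desc: $out"
  else
    echo "$desc: not found"
  fi
}

show "L4T release" head -n 1 /etc/nv_tegra_release
show "TensorRT"    dpkg-query -W -f '${Version}' libnvinfer8
show "CUDA (nvcc)" /usr/local/cuda/bin/nvcc --version
```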
Thanks.
system (Closed May 8, 2024, 5:40am)
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.