How to reproduce the official benchmarks of Jetson Orin NX 16GB?

Description

I want to use the trtexec tool to quickly reproduce the Jetson Orin NX 16GB v3.1 benchmark data from this link (https://developer.nvidia.com/embedded/jetson-benchmarks).
The method in the following link does not seem to be what I want (https://github.com/mlcommons/inference_results_v3.1/tree/main/closed/NVIDIA).
I currently have the following questions:
1. Where are the download links for the corresponding models?
2. What are the corresponding trtexec run parameters?
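For reference, trtexec can build and time an engine directly from an ONNX file. The flags below (`--onnx`, `--saveEngine`, `--fp16`, `--useDLACore`, `--allowGPUFallback`, `--iterations`, `--useSpinWait`) are standard trtexec options, but the model file names, batch sizes, and precisions used for the official numbers are not published as trtexec flags, so this is only a sketch of a typical invocation:

```python
# Sketch: assemble a trtexec benchmarking command for one model.
# The ONNX path, engine path, and iteration count are placeholders;
# the official benchmark configurations are assumptions, not confirmed.
import shlex


def trtexec_cmd(onnx_path, engine_path, fp16=True, dla_core=None):
    """Assemble a trtexec invocation as a list of arguments."""
    cmd = [
        "trtexec",
        f"--onnx={onnx_path}",
        f"--saveEngine={engine_path}",
        "--iterations=100",   # number of timed inference iterations
        "--useSpinWait",      # busy-wait for more stable latency numbers
    ]
    if fp16:
        cmd.append("--fp16")  # enable FP16 precision
    if dla_core is not None:
        # run on a DLA core, falling back to GPU for unsupported layers
        cmd += [f"--useDLACore={dla_core}", "--allowGPUFallback"]
    return cmd


# Example: a hypothetical ResNet-50 run on DLA core 0
print(shlex.join(trtexec_cmd("resnet50.onnx", "resnet50.engine", dla_core=0)))
```

The reported "Throughput" line in trtexec's output is the queries-per-second figure usually compared against the published FPS numbers.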

Are you looking for NVIDIA-AI-IOT/jetson_benchmarks: Jetson Benchmark (github.com)?

I have read this page and have the following questions:
1. The models in the CSV on that page do not seem to correspond to the models on the Jetson Benchmarks | NVIDIA Developer page;
2. I tried to run the program and got the following error:
Please close all other applications and Press Enter to continue…
Setting Jetson orin in max performance mode
Traceback (most recent call last):
  File "benchmark.py", line 130, in <module>
    main()
  File "benchmark.py", line 28, in main
    system_check.run_set_clocks_withDVFS()
  File "/home/jetson/Desktop/benchmark/jetson_benchmarks/utils/utilities.py", line 47, in run_set_clocks_withDVFS
    self.set_clocks_withDVFS(frequency=self.gpu_freq, device='gpu')
  File "/home/jetson/Desktop/benchmark/jetson_benchmarks/utils/utilities.py", line 77, in set_clocks_withDVFS
    self.set_frequency(device=device, enable_register=self.enable_register, freq_register=self.freq_register, frequency=frequency, from_freq=from_freq)
  File "/home/jetson/Desktop/benchmark/jetson_benchmarks/utils/utilities.py", line 88, in set_frequency
    self.write_internal_register(freq_register1, frequency)
  File "/home/jetson/Desktop/benchmark/jetson_benchmarks/utils/utilities.py", line 108, in write_internal_register
    reg_write = open(register, "w")
FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/bpmp/debug/clk/nafll_gpc1/rate'
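For reference, before writing to that debugfs path one can list which `nafll_gpc*` clock nodes the BPMP debugfs actually exposes; the `nafll_gpc1` node from the traceback may simply be absent on this board. This is a hypothetical check (the base path is taken from the traceback; it requires debugfs to be mounted and root access):

```python
# Sketch: list which GPU NAFLL clock nodes exist under the BPMP debugfs
# before attempting to write frequencies to them.
from pathlib import Path


def available_gpc_clocks(base="/sys/kernel/debug/bpmp/debug/clk"):
    """Return the names of nafll_gpc* clock directories that exist."""
    root = Path(base)
    if not root.is_dir():
        return []  # debugfs not mounted, or no root access
    return sorted(p.name for p in root.glob("nafll_gpc*"))


print(available_gpc_clocks())
```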

I tried commenting out this line of code and got an error:
1. Terminal output:
------------Executing ssd_resnet34_1200x1200------------
---------------------- 0 0 0
Error in Build, Please check the log in: ./models
Error in Build, Please check the log in: ./models
Error in Build, Please check the log in: ./models
We recommend to run benchmarking in headless mode

Model Name: ssd_resnet34_1200x1200
FPS:0.00

2. Log file output:
[07/18/2024-11:16:56] [E] Error opening engine file: ./models/mobilenet_v1_ssd_b32_ws1024_dla1.engine
[07/18/2024-11:16:56] [E] Failed to create engine from model or file.
[07/18/2024-11:16:56] [E] Engine set up failed
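The log suggests trtexec is being asked to load an engine file that was never built. A quick hypothetical check (the engine file name is taken from the log above; the expected-engine list is a placeholder) can report which engines are missing from the models directory before the benchmark is run:

```python
# Sketch: report which expected TensorRT engine files are missing from
# the models directory, so build failures are caught before benchmarking.
from pathlib import Path


def missing_engines(model_dir, expected):
    """Return the expected engine file names not present in model_dir."""
    d = Path(model_dir)
    return [name for name in expected if not (d / name).is_file()]


# Engine name taken from the error log; the full expected list is unknown.
print(missing_engines("./models", ["mobilenet_v1_ssd_b32_ws1024_dla1.engine"]))
```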

What problem do you run into if you follow the instructions at inference_results_v3.1/closed/NVIDIA at main · mlcommons/inference_results_v3.1 · GitHub?

I looked at that page, but the process is too complicated; I want to use trtexec for quick testing and study.

In fact, I just want to use trtexec to reproduce the timing (frame rate) numbers of the official benchmarks. @junx2