Hello everyone.
I have to make a comparative benchmark between the performance of my Jetson Nano and that of the Intel CPU in my computer.
I saw that there is this benchmark executable on the jetson nano:
GitHub - NVIDIA-AI-IOT/jetson_benchmarks: Jetson Benchmark

The question is: can I convert this code to run on my computer (a MacBook Pro) that doesn’t have an NVIDIA graphics card?

Any type of benchmark or idea is accepted.


The GitHub sample runs inference on some DNN models with TensorRT, which is a GPU implementation.
If you want to compare against a CPU, you can port the sample to a framework that supports CPU mode (e.g., onnxruntime).
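As a starting point, here is a minimal sketch of a CPU-side FPS harness. The timing loop is generic; the onnxruntime lines in the comments (model path, input name, `CPUExecutionProvider`) are placeholders you would adapt to the actual models exported from the Jetson benchmark. The stand-in "model" below is just a matrix multiply so the snippet runs without a model file.

```python
import time
import numpy as np

def benchmark_fps(run_inference, input_batch, warmup=5, iters=50):
    """Time repeated single-batch inference and report frames per second."""
    for _ in range(warmup):           # warm-up runs to exclude one-time setup cost
        run_inference(input_batch)
    start = time.perf_counter()
    for _ in range(iters):
        run_inference(input_batch)
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Stand-in "model": a matrix multiply. With onnxruntime you would instead do:
#   sess = onnxruntime.InferenceSession("model.onnx",
#                                       providers=["CPUExecutionProvider"])
#   run_inference = lambda x: sess.run(None, {"input": x})
# ("model.onnx" and "input" are placeholder names for your exported model.)
weights = np.random.rand(3 * 224 * 224, 10).astype(np.float32)
run_inference = lambda x: x.reshape(1, -1) @ weights
input_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

print(f"{benchmark_fps(run_inference, input_batch):.1f} FPS")
```

Running the same harness (with the same ONNX model and input resolution) on both the Jetson Nano and the laptop gives directly comparable FPS numbers.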


Thanks for the reply.
Specifically, I need to run the models already present on the Jetson Nano on a different architecture (like my PC) and compare the difference in FPS and accuracy.
I wanted to use something “standard” for both architectures in order to have a fair comparison.

The aim is to produce a table like the one at Jetson Benchmarks | NVIDIA Developer, but for an architecture other than the Jetson Nano’s.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.