I’ve been comparing the NVIDIA Jetson AGX and the Mac Mini, and it seems that the Mac Mini offers more memory (96 GB vs. 64 GB) at a similar price point. I’m curious if there’s a compelling reason to choose the Jetson AGX instead. Are there any benchmarks that compare the two?
Please refer to the AGX Orin benchmark results at Jetson Benchmarks | NVIDIA Developer.
You may need to find a similar test for the Mac mini yourself to do the comparison.
Thanks
Hi, I do not have access to either board, but I used ChatGPT to provide a comparison.
Can you give me your opinion about the accuracy of the comparison provided by the AI, especially the comparison between the different precisions (table 2)?
Comparing the GPU capabilities of the Apple M4 Pro chip with a 14-core CPU and 20-core GPU to the NVIDIA Jetson AGX Orin 64GB reveals distinct differences tailored to their respective applications. Here’s a detailed comparison:
GPU Specifications:
| Specification | Apple M4 Pro 20-Core GPU | NVIDIA Jetson AGX Orin 64GB |
|---|---|---|
| GPU Architecture | Custom Apple design | NVIDIA Ampere architecture |
| GPU Cores | 20 cores (estimated 2,560 shading units) | 2,048 CUDA cores and 64 Tensor Cores |
| GPU Clock Speed | Up to 1.8 GHz | Up to 1.3 GHz |
| Memory | Unified memory, up to 64 GB with 273 GB/s bandwidth | 64 GB LPDDR5 with 204.8 GB/s bandwidth |
| AI Performance | Not specified; utilizes a 16-core Neural Engine for AI tasks | Up to 275 TOPS (INT8) |
| Ray Tracing Support | Hardware-accelerated ray tracing | Hardware support for ray tracing |
| Manufacturing Process | 3 nm | 8 nm |
| Power Consumption | Approximately 32 W (estimated based on similar configurations) | Configurable between 15 W and 60 W |
Performance Metrics:
| Metric | Apple M4 Pro 20-Core GPU | NVIDIA Jetson AGX Orin 64GB |
|---|---|---|
| FP32 Performance | Approximately 9.2 TFLOPS | Approximately 5.3 TFLOPS |
| FP16 Performance | Approximately 18.4 TFLOPS (estimated, as FP16 is typically double FP32 performance) | Approximately 10.6 TFLOPS (estimated, as FP16 is typically double FP32 performance) |
| INT8 Performance | Not specified; relies on Neural Engine for AI computations | Up to 275 TOPS |
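For what it's worth, here is a rough sketch (Python, just arithmetic) of how theoretical peak numbers like these are usually derived; the core counts and clock speeds are the estimates from the tables above, not confirmed specifications, and the FP16 = 2x FP32 assumption is exactly the one the table states:

```python
# Rough sketch of how the theoretical figures above appear to be estimated.
# Peak FP32 = shading units (or CUDA cores) x 2 FLOPs per clock (FMA) x clock speed.
# Core counts and clocks are the table's estimates, not official specifications.

def peak_tflops(cores: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical peak throughput in TFLOPS."""
    return cores * flops_per_clock * clock_ghz / 1000.0

m4_pro_fp32 = peak_tflops(2560, 1.8)   # ~9.2 TFLOPS
orin_fp32   = peak_tflops(2048, 1.3)   # ~5.3 TFLOPS

# FP16 is assumed to run at twice the FP32 rate, which holds for Ampere CUDA
# cores and is a common (but unverified) assumption for the Apple GPU.
m4_pro_fp16 = 2 * m4_pro_fp32          # ~18.4 TFLOPS
orin_fp16   = 2 * orin_fp32            # ~10.6 TFLOPS

print(f"M4 Pro FP32 ~{m4_pro_fp32:.1f} TFLOPS, Orin FP32 ~{orin_fp32:.1f} TFLOPS")
```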
Considerations:
Application Focus:
- Apple M4 Pro: Designed for consumer devices like laptops and desktops, emphasizing a balance between performance and power efficiency, with a strong focus on graphics performance and general-purpose computing.
- NVIDIA Jetson AGX Orin 64GB: Engineered for AI-intensive applications, such as robotics and edge computing, prioritizing AI inference performance with specialized Tensor Cores for deep learning tasks.
Memory Architecture:
- Apple M4 Pro: Features unified memory architecture, allowing the CPU and GPU to share the same memory pool, enhancing data access speed and efficiency.
- NVIDIA Jetson AGX Orin 64GB: Employs discrete memory architecture, with dedicated memory for the GPU, which can be advantageous for parallel processing tasks common in AI workloads.
AI Performance:
- Apple M4 Pro: While specific AI performance metrics are not detailed, it utilizes a 16-core Neural Engine to handle AI and machine learning tasks efficiently.
- NVIDIA Jetson AGX Orin 64GB: Delivers substantial AI performance with up to 275 TOPS (INT8), making it highly suitable for complex AI inference operations.
Conclusion:
The Apple M4 Pro 20-Core GPU offers robust graphics performance suitable for high-end consumer applications, with an emphasis on power efficiency and integration within Apple’s ecosystem. In contrast, the NVIDIA Jetson AGX Orin 64GB is tailored for AI-centric tasks, providing exceptional AI inference capabilities, making it ideal for industrial and research applications requiring intensive AI computations.
The Jetson modules are not meant to be used as standalone computers. They are meant to be integrated in custom hardware.
If you need a desktop system, go for a Mac.
If you need Linux, choose Jetson.
If you need to integrate own hardware, choose Jetson.
If you need CUDA or software that needs CUDA, choose Jetson.
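To make the CUDA point concrete, here is a minimal sketch, assuming PyTorch is installed on both machines (my assumption, not something tested here): CUDA code targets the Orin's GPU directly, while on an M4 Pro the closest equivalent is the Metal (MPS) backend.

```python
import torch

# On a Jetson AGX Orin (with the NVIDIA-provided PyTorch build), CUDA is available:
if torch.cuda.is_available():
    device = torch.device("cuda")   # Ampere iGPU, Tensor Cores usable via mixed precision
# On an M4 Pro Mac mini there is no CUDA; PyTorch falls back to Metal (MPS) or CPU:
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(1024, 1024, device=device)
y = x @ x                           # same code, different backend
print(f"Running on: {device}")
```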
My question was about how the two systems compare in terms of GPU performance in AI.
I do not care about desktop vs embedded applications right now.
My question is: what are the constraints of what you are trying to build? The Jetson platform is made for robotics and power-constrained systems. Also, what type of AI are you trying to run? It has become a buzzword for every single industry, but depending on the type of AI, a Jetson may be better.
If your build is size-constrained but not power-constrained, I suggest looking into a Minisforum computer or a custom mini-ITX build. You can get a lot more compute than the Mac mini for $2,000.
Here is why you would want to use a Jetson:
- If the machine runs on custom hardware like a robot (as @fchkjwlsq said)
- Jetsons have built-in vision accelerators that let you run object detection models at a fraction of the power.
- If you plan on using Docker with external sensors like cameras
- If you want access to all of the models that NVIDIA has made for CUDA, like Cosmos.
- If you want access to CUDA-accelerated libraries like cuML or cuOpt (see the sketch after this list)
- If you want to write code that scales to more powerful platforms like desktop GPUs, DGX, or Digits
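To illustrate the cuML point, here is a minimal sketch, assuming a working RAPIDS cuML install (for example via NVIDIA's containers); the API mirrors scikit-learn, which is what makes the "same code scales up" argument work:

```python
# Minimal sketch of a CUDA-accelerated workflow with RAPIDS cuML
# (assumes cuML and CuPy are installed, e.g. from an NVIDIA-provided container).
import cupy as cp
from cuml.cluster import KMeans   # drop-in analogue of sklearn.cluster.KMeans

# Random data generated directly on the GPU
X = cp.random.random((100_000, 16), dtype=cp.float32)

# Fit k-means entirely on the GPU; the same lines would run unchanged on a
# desktop GPU or a DGX system, which is the "scales up" point above.
km = KMeans(n_clusters=8, random_state=0)
km.fit(X)

print(km.cluster_centers_.shape)   # (8, 16)
```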
Here is why you would want to use a Mac mini:
- If you only plan on running LLMs or SLMs (see the sketch after this list)
- If you need hardware-accelerated ray tracing (Jetson does not have RT cores)
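For the LLM case, here is a minimal sketch of what running a small model on a Mac mini could look like, assuming the mlx-lm package is installed; the model name is only an example, not a recommendation:

```python
# Minimal sketch: running a small LLM on Apple Silicon with MLX
# (assumes `pip install mlx-lm`; the model name below is just an example).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Explain the difference between FP16 and INT8 inference in two sentences."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

The same kind of workload on the Orin would typically go through a CUDA-based stack instead, for example llama.cpp built with CUDA support.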
The Jetson AGX Orin has been out for around 3 years now, with a refresh on the way. If you can wait for the Jetson AGX Thor or Project Digits, it would be worth it.