Can the NVIDIA Jetson Orin Nano Super Developer Kit be connected together on the internet, forming a super AI network?

Can multiple NVIDIA Jetson Orin Nano Super Developer Kits be connected together over the internet to form a super AI network, where each Jetson is a node with its own unique knowledge, like a mixture of experts?

Multiple NVIDIA Jetson Orin Nano Super Developer Kits can be connected to form a distributed AI network, similar to a mixture-of-experts model. This setup leverages the power of multiple Jetson devices to create a more robust and efficient AI system.

**Network Configuration**

The Jetson devices can be networked together into a Local Area Network (LAN), forming a cluster. This cluster can be managed using Kubernetes technology, such as K3s, which allows for scalable deployment and management of AI workloads.
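As a rough illustration of how work might be spread across such a cluster, here is a minimal Python sketch of a round-robin scheduler that assigns inference jobs to nodes. The node IP addresses are hypothetical placeholders; a real K3s deployment would use Kubernetes services and load balancing instead of hand-rolled scheduling.

```python
from itertools import cycle

# Hypothetical LAN addresses of the Jetson nodes in the cluster.
NODES = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]

class RoundRobinScheduler:
    """Assigns incoming inference jobs to cluster nodes in turn."""

    def __init__(self, nodes):
        self._ring = cycle(nodes)

    def assign(self, job_id):
        # Return (job, node) pairs; each call advances to the next node.
        node = next(self._ring)
        return (job_id, node)

sched = RoundRobinScheduler(NODES)
assignments = [sched.assign(i) for i in range(4)]
for job, node in assignments:
    print(f"job {job} -> {node}")
```

Job 3 wraps around to the first node again, since `cycle` repeats the node list indefinitely.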

**Mixture of Experts Approach**

The networked Jetson devices can implement a mixture-of-experts (MoE) architecture, where each device acts as an "expert" specializing in specific tasks or subsets of data. This approach offers several benefits:

  1. **Distributed Processing:** Each Jetson device can handle different aspects of AI tasks, allowing for parallel processing and improved overall performance.

  2. **Specialized Expertise:** Different Jetson nodes can be trained on specific subsets of data, becoming proficient in handling particular types of inputs.

  3. **Scalability:** The system can be easily expanded by adding more Jetson devices to the network, increasing the collective AI capabilities.

  4. **Efficient Resource Utilization:** The MoE approach allows for selective activation of specific experts, reducing computation costs and improving inference speed.
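The selective activation in point 4 can be sketched with a simple top-k gating function: a gate scores every expert, and only the k highest-scoring nodes are asked to run inference. The expert names and gate scores below are made up for illustration; a real MoE gate is a small learned network, not fixed scores.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_scores, experts, k=2):
    """Pick the top-k experts by gate score and renormalize their weights.

    Only the selected nodes would run inference; the rest stay idle,
    which is where the MoE compute savings come from.
    """
    weights = softmax(gate_scores)
    ranked = sorted(range(len(experts)), key=lambda i: weights[i], reverse=True)[:k]
    total = sum(weights[i] for i in ranked)
    return [(experts[i], weights[i] / total) for i in ranked]

# Hypothetical cluster: four Jetson nodes, each a specialized expert.
experts = ["jetson-0", "jetson-1", "jetson-2", "jetson-3"]
scores = [0.1, 2.0, 0.5, 1.2]  # made-up gate scores for one input
selected = route(scores, experts, k=2)
print(selected)  # jetson-1 and jetson-3 are selected; the others idle
```

The renormalized weights of the selected experts sum to 1, so their outputs can be combined as a weighted average.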

**Performance and Capabilities**

The Jetson Orin Nano Super Developer Kit offers significant performance improvements over previous models, making it well-suited for this type of distributed AI network:

  • Up to 70% higher generative AI performance
  • Ability to run a wide range of LLMs, VLMs, and Vision Transformers
  • Support for models with up to 8B parameters

By combining multiple Jetson devices in a mixture-of-experts configuration, you can create a powerful, flexible, and efficient AI network capable of handling complex tasks while maintaining data privacy and reducing overall computational requirements.

That sounds like an interesting idea. We're not able to comment on whether it can or cannot work, as this concept is not limited by the hardware device; the bottleneck is the software solution. Once that is mature, any networked device can be a node for AI computing.

I know that a long time ago there was a guy who posted pictures and maybe YouTube content on creating a Beowulf cluster. This is certainly possible, but keep in mind that the ARM memory model and other “custom” features of a Jetson (such as the GPU being integrated directly to the memory controller) would change requirements compared to software which is typical on a desktop PC.

Btw, I consider Jetsons to be the best you can get for low-power embedded AI. However, a single modern desktop gaming GPU would likely exceed the performance of many Jetsons in such a cluster. This doesn't mean it wouldn't be useful, as distributing a system has both costs and benefits, but I don't think you'd want to do this for performance reasons alone; if you have other reasons, especially experimentation and learning, then this would be possible.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.