Hello
Can I cluster TX1 & TX2 modules over PCIe?
Is it possible to make a 512-core module (256 cores + 256 cores)?
-
I found a forum thread where the answer is 'no':
https://devtalk.nvidia.com/default/topic/1015338/jetson-tx2-or-tx1-clustering-/
-
But there is a product that looks like it connects two Jetson modules:
https://auvidea.com/j200-dual-tx1/
If it is possible, please let me know how to connect them (with H/W or S/W).
Please reply.
Have a good day :-)
Hello, since the Tegra PCIe controller supports root complex mode (i.e. it's an upstream device), connecting two root complex domains over PCIe would require a non-transparent PCIe switch (like those from PLX). So in theory, a carrier board with a non-transparent-capable PCIe switch between two Jetson nodes could do it. The Auvidea J200 carrier uses gigabit Ethernet between the nodes.
Note that even in the non-transparent switch scenario, the GPUs are still kept separate (i.e. there will still be two ARM CPU complexes, each with an independently programmed 256-core GPU). The benefits of a PCIe interconnect would be remote DMA and a high-bandwidth data pipe between the nodes.
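To make the "two independently programmed GPUs" point concrete, here is a minimal sketch (not an official NVIDIA sample) of what clustering two Jetson nodes looks like in practice: each node runs this program against its own local GPU, then the partial results are exchanged over whatever link joins the nodes (the J200's gigabit Ethernet, or a virtual network device layered on an NTB data pipe). The IP address, port, and "master"/"worker" roles are made-up assumptions for illustration; true remote DMA over a non-transparent switch would need platform-specific drivers that are not shown here.

```cpp
// cluster_node.cu - run one instance per Jetson node.
// Hypothetical example: addresses, port, and roles are assumptions.
#include <cstdio>
#include <cstring>
#include <vector>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cuda_runtime.h>

// Each node scales its own half of the data on its local 256-core GPU.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(int argc, char **argv) {
    const int   N        = 1 << 20;                              // elements per node
    const bool  isMaster = (argc > 1 && strcmp(argv[1], "master") == 0);
    const char *peerIp   = (argc > 2) ? argv[2] : "192.168.1.2"; // assumed peer address
    const int   port     = 5000;                                 // assumed port

    // 1) Compute locally: the two GPUs stay independent, one CUDA context per node.
    std::vector<float> host(N, 1.0f);
    float *dev = nullptr;
    cudaMalloc(&dev, N * sizeof(float));
    cudaMemcpy(dev, host.data(), N * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(N + 255) / 256, 256>>>(dev, N, 2.0f);
    cudaMemcpy(host.data(), dev, N * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    // 2) Exchange results over the inter-node link with plain TCP sockets.
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);

    if (isMaster) {
        // Master node listens and receives the worker's buffer.
        addr.sin_addr.s_addr = INADDR_ANY;
        bind(sock, (sockaddr *)&addr, sizeof(addr));
        listen(sock, 1);
        int conn = accept(sock, nullptr, nullptr);
        std::vector<float> remote(N);
        size_t got = 0;
        while (got < N * sizeof(float)) {
            ssize_t r = read(conn, (char *)remote.data() + got,
                             N * sizeof(float) - got);
            if (r <= 0) break;
            got += r;
        }
        printf("master: local[0]=%f remote[0]=%f\n", host[0], remote[0]);
        close(conn);
    } else {
        // Worker node connects to the master and sends its buffer.
        inet_pton(AF_INET, peerIp, &addr.sin_addr);
        connect(sock, (sockaddr *)&addr, sizeof(addr));
        write(sock, host.data(), N * sizeof(float));
        printf("worker: sent %d floats\n", N);
    }
    close(sock);
    return 0;
}
```

Usage (assumed hostnames/addresses): build with `nvcc cluster_node.cu -o cluster_node`, start `./cluster_node master` on one node and `./cluster_node worker <master-ip>` on the other. The point is that the clustering happens in software across two separate systems; it does not merge the two GPUs into a single 512-core device.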