Jetson TX2 inter-board PCIe connection

My application needs two Jetson TX2s working together to meet its computational demand. This is a reply from a previous topic on the same question for the TX1 board:
"The Tegra’s PCIe controller operates as a root complex. Hence you will need a PCIe switch with non-transparent (NT) bridging, which permits a root complex to be attached upstream. Some switches (like PLX’s 8700 series) include support for multiple NT ports, allowing multiple Jetsons to be hooked up per switch (3 at a time).

The big positive of the architecture is superior IPC bandwidth and offloaded RDMA. The ideal PCIe switches are the PLX 8717 or PLX 8724 (the 8724 if you require more lanes to connect additional PCIe peripherals in the system). The 8717/24 include 2 NT ports (allowing 3 Jetsons) and 4 simultaneous DMA engines.

In theory it’s possible to cable this all up to test it, using the PLX RDK and the Jetson TX1 devkit’s desktop PCIe slot. I hope someone will! :)"
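To put a rough number on the "superior IPC bandwidth" claim above, here is a back-of-envelope estimate. It assumes the Jetson exposes a PCIe Gen2 x4 link (the TX2 devkit slot's configuration); the constants are standard PCIe figures, not measurements of this setup:

```python
# Back-of-envelope PCIe link bandwidth estimate (assumptions, not measurements).
# PCIe Gen2 signals at 5 GT/s per lane with 8b/10b line coding (80% efficiency).
GT_PER_LANE = 5.0          # gigatransfers/s per lane, PCIe Gen2
ENCODING_EFFICIENCY = 0.8  # 8b/10b coding: 8 data bits per 10 line bits
LANES = 4                  # x4 link, as on the TX2 devkit slot

usable_gbit = GT_PER_LANE * ENCODING_EFFICIENCY * LANES  # usable Gbit/s
usable_gbyte = usable_gbit / 8                           # GB/s, one direction

print(f"x{LANES} Gen2 link: {usable_gbit:.0f} Gbit/s "
      f"= {usable_gbyte:.0f} GB/s per direction")
# → x4 Gen2 link: 16 Gbit/s = 2 GB/s per direction
```

Real throughput through an NT bridge will be lower once transaction-layer packet overheads and DMA setup costs are counted, but it still compares favorably with gigabit Ethernet.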

Has anyone actually tried this? Is there any additional useful information regarding the PCIe protocol work that would need to be developed? Thanks.

I don’t know very much about tying multiple PCI Express root complexes together, other than that it requires some pretty advanced circuit-board layout and some very fiddly-to-develop drivers, and the market for your solution will be small, so overhead costs will be high.

Have you considered going with a mini-ITX motherboard and a PC graphics card like a GTX 1050 Ti?

That’s going to give you more bang for the buck, AND won’t require any custom development. You’ll save months of time, and money on the final system, too. Two TX2 modules are $800, plus whatever hardware you have to build to make the bridging happen. A standard-component PC can out-perform that solution pretty handily:

Total cost: $525. Time to build: 1 hour.

The caveat might be that, if you are really battery life sensitive, the dual-TX2 solution will let you go farther. Then again, you can afford a bigger battery if you go with PC parts :-)

Dolphin Interconnect Solutions offers PCIe cards and switches, and can connect multiple Tegra X2 boards to a switch.

They also have a software stack with everything from a sockets API to an IP driver and a low-level shared memory API (SISCI).
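One practical consequence of the IP-driver layer: ordinary sockets code should run over the PCIe link unchanged. A minimal sketch of that idea; the address and port here are loopback placeholders (on a real Dolphin setup, HOST would instead be the address assigned to the PCIe network interface), and the server runs in a thread of the same process purely so the example is self-contained:

```python
import socket
import threading

# Minimal TCP echo pair. Over an IP-over-PCIe driver, the sockets code would be
# identical to ordinary Ethernet networking; only HOST would change.
HOST = "127.0.0.1"  # placeholder; not a Dolphin-specific address

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))          # port 0: let the OS pick a free port
srv.listen(1)
PORT = srv.getsockname()[1]  # the port the OS actually assigned

def echo_once():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the first message back

t = threading.Thread(target=echo_once)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello over PCIe")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply)  # → b'hello over PCIe'
```

For latency- or bandwidth-critical paths you would drop down to the SISCI shared-memory API instead, but the point of the IP layer is that existing networked applications need no porting at all.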

The Dolphin stuff looks cool. (They have a 16-port switch for about $7k?) Could give Fibre Channel a run for its money.

It seems like all their host adapters are x8 and x16, which means they won’t fit in the x4 PCI Express slot on the TX2. (Not to mention, it’s unlikely to be a cost-effective solution to any kind of engineering challenge where the TX2 makes sense :-)