GTX470 PCIe extension cable solution

I want to test chemistry applications ported to CUDA on my GTX470 GPU.
To test the GTX470 I need to connect it to a 1U rack-mounted server with a PCIe x16 slot.
Obviously the GPU will have to sit outside the server, so I have found a cheap solution that consists of:

              GPU GTX470 <--> PCIe x16 adapter <--> HDMI cable <--> PCIe x1 adapter <--> server PCIe bus

Is this solution wrong because of the x1 adapter card?

Another option is a ribbon riser card cable (PCI-Express x16), but it is too short.

Is there any other cheap solution?

Thanks.

I cannot imagine what you are proposing will ever work.

Neither a GF100 nor a GT200 will work in a 1U compute node. A single-slot GT240 like this one MIGHT work.

We built our first GPU cluster using re-purposed Atipa 2U compute nodes, where the only things we kept were the cases and power supplies. These had 25 amps on a single +12V rail, which proved sufficient to power GTX 275 GPUs. Mounting the cards required flexible risers; however, subsequent benchmarks showed a 50% loss of PCIe bandwidth compared to when the GPUs were plugged directly into the slots or into rigid risers. From this experience, we have since built GPU compute nodes using generic ATX cases shelved in bakery racks. Others put 3 or 4 GPUs in a 4U case. In the future, we will probably employ more off-the-shelf solutions. If you have the idea of employing existing rackmount equipment for GPU computing, be ready for the likelihood that it won’t work due to some combination of insufficient power, cooling, expansion slots, or space in the box for the GPU.

I’m very puzzled by the HDMI cable in the middle. Most of the external PCI-Express solutions I’ve seen (which are usually x4 links) use something like InfiniBand cabling, as it has better signal characteristics.

In any event, this is going to be very, very hard to make work for any definition of “cheap”. Molex is selling connectors designed for external PCI-Express:

http://www.andovercg.com/datasheets/molex-74546-0813.pdf

But I don’t know where you buy them. Of course, the Tesla rackmount enclosures do exactly what you want, but they are not exactly cheap either.

A 1U dual-Fermi solution:

http://www.supermicro.com/products/system/…-TF.cfm?GPU=FM2

Flexible risers typically do not support PCI-E ver. 2.0 speeds.

This is what I was referring to as an “off-the-shelf” machine. Clearly GPU servers can come in any form factor if they are designed from scratch by an OEM. The question concerned using an existing rackmount compute node.

That is consistent with our experience, but I am still curious how you know this. Which ones do support PCIex16 2.0 speeds? Do you have any links? It is not a topic that has come up in these forums (or anywhere else that I am aware of.)

The guys at Adex Electronics (http://www.adexelec.com ) have been helpful with riser questions that I have had. Browsing their site, there are suggestions of “Gen 2” compatible ribbons.

Besides communication rate, I’d also be concerned with pushing 75W of power over a ribbon cable.
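For a rough sense of scale (my own back-of-the-envelope figure, assuming the slot's full 75 W budget is drawn mostly from the +12 V pins):

$I = P / V \approx 75\ \mathrm{W} / 12\ \mathrm{V} \approx 6\ \mathrm{A}$

Several amps flowing through a handful of thin ribbon conductors is exactly the kind of thing I would want to check before committing to it.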

I only want to test the GPU performance with chemistry applications before investing a lot of money in a professional solution.

The cheapest way to do this is to find an existing workstation (not rackmount server) that has the power supply required for the GPU you are interested in. If the GTX 480, which requires a 6-pin and 8-pin PCI-E power plug, draws too much power, then perhaps a GTX 470 will be easier to install for testing purposes. Assuming you find the right computer, this costs $350-$500 for the GPU alone.

The next cheapest way is to build/buy a workstation with the required power supply and install a GPU in it (or have the system builder put it in for you). This can be done for less than $1500 (including the card) usually.
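Once the card is installed in whatever workstation you end up with, a quick sanity check before running the chemistry benchmarks is to confirm that the CUDA runtime actually sees the GPU. Here is a minimal sketch using the standard runtime API (file name and output format are my own; the SDK's deviceQuery sample does the same thing in more detail):

// check_gpu.cu -- list CUDA devices visible to the runtime.
// Build with:  nvcc check_gpu.cu -o check_gpu
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA devices found: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Print the basics: name, SM count, and total global memory.
        printf("Device %d: %s, %d multiprocessors, %.0f MB global memory\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}

If the GTX 470 shows up in the list, the card, power, and driver side of the installation is at least functional, and you can move on to the real benchmarks.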

Thanks a lot! If we buy any more flexible risers, we will be sure that they are rated for PCIex16 2.0 speeds.

For the record, we bought 12 flexible risers from Logic Supply. In our experience, these DO NOT support Gen2 transfer rates. As measured by “bandwidthTest --memory=pinned”, PCIe bandwidth for the GTX275 cards connected with flexible risers is approximately half that of a directly connected card:

Connected via the flexible riser:

  Host to Device Bandwidth for Pinned memory
  Transfer Size (Bytes)   Bandwidth (MB/s)
  33554432                2693.5

  Device to Host Bandwidth for Pinned memory
  Transfer Size (Bytes)   Bandwidth (MB/s)
  33554432                3285.3

Directly connected:

  Host to Device Bandwidth for Pinned memory
  Transfer Size (Bytes)   Bandwidth (MB/s)
  33554432                5231.5

  Device to Host Bandwidth for Pinned memory
  Transfer Size (Bytes)   Bandwidth (MB/s)
  33554432                5532.1
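For anyone who wants to reproduce this without the SDK sample, here is a minimal sketch of the same pinned-memory host-to-device measurement using the runtime API. It is a simplified stand-in for bandwidthTest, not the tool itself; the 32 MB transfer size matches the figures above:

// pinned_bw.cu -- rough pinned host-to-device bandwidth measurement.
// Build with:  nvcc pinned_bw.cu -o pinned_bw
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    const size_t bytes = 32 * 1024 * 1024;   // 33554432 bytes, as quoted above
    const int    reps  = 10;                 // average over several copies

    void *h_buf, *d_buf;
    cudaMallocHost(&h_buf, bytes);           // pinned (page-locked) host memory
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    for (int i = 0; i < reps; ++i)
        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double mbps = (bytes / (1024.0 * 1024.0)) * reps / (ms / 1000.0);
    printf("Host to Device (pinned): %.1f MB/s\n", mbps);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}

The numbers should be in the same ballpark as bandwidthTest, so it is an easy way to compare a riser-connected card against a directly connected one.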

Edit: The rigid risers we tested also performed at the higher throughput, even though they were not advertised specifically as Gen2.

As for power draw, this has not been an issue so far. We stress tested the boxes when they were built and measured 90 watts idle at the wall and 320 watts under load, so the ribbon cable must be providing its share of the current. I do agree though that flexible risers are a design compromise and should be avoided, if possible.

Of course, I am aware that I won’t have optimal conditions (temperature/power/…), but I will try to minimize the drawbacks.

Yes, I was looking at the Molex cables for extending PCI Express. I would need a 16x PCI Express stacked connector inside the rack server (no problem there), but at the other end I cannot connect the 16x extension cable connector directly to the GPU, because the GTX470 doesn’t have a stacked connector. Molex has good solutions in any case.

I know; that is my last option if there is no cable extension solution. Of course, if the tests are satisfactory I will buy a professional solution (Supermicro GPU server, NVIDIA Tesla, Asus GPU, …).