8-pin Y splitter cable for GTX 580 - looking for a suitable cable

I purchased two GTX 580s and put them into a workstation. However, I noticed that I don’t have enough power connectors. The power supply only has one 8-pin outlet, so I need an 8-pin Y splitter. I found this online:

However, when I compared the connectors, I found that they don’t match the 8-pin socket on the GTX 580. They are slightly different, so this Y splitter cannot be used. Could anyone please tell me where I can get the correct 8-pin Y splitter for the GTX 580? Thank you,

What power supply is it that you plan on running SLI 580s with?

I’d rather change the power supply than use a Y-splitter. Chances are that the power supply isn’t up to supporting two GTX 580s, and you might lose endless hours trying to figure out where weird malfunctions come from (if you even realize there’s something wrong with your results).

This is an AMAX workstation with 4 PCI-E slots. It is supposed to carry at least three GPU cards. The power supply has two 8-pin outlets and two 6-pin outlets. I said only one because my current C2050 is using one 8-pin (and one 6-pin) already. So I only have one 8-pin outlet left for the two GTX 580.

Do you happen to know the correct 8-pin Y splitter? Thanks,

It might be difficult to find such a splitter because it has no valid use case. These power cables are there for a reason.
The mainboard slot can provide 75W, as can a 6-pin connector. An 8-pin connector can provide 150W. Compared to an 8-pin connector, a 6-pin connector has more headroom relative to its specification, so you can sometimes get away with plugging a 6-pin cable into an 8-pin connector (it’s designed to mechanically fit that way).

A GTX 580 officially uses 244W, the C2050 238W. That’s 726W altogether. Your power supply provides 2×150W + 2×75W + 3×75W = 675W. Not a good idea to power your three GPUs with.

If you really insist on keeping your current power supply and using it for three GPUs, plug the two 6-pin cables into the C2050 and an 8-pin cable into each of the GTX 580s. That way each GPU gets 225W - less than needed, but at least close and evenly distributed. Also, the 6-pin connectors likely have a bit of headroom, so at least the C2050 hopefully gets enough power to work reliably.
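To make that arithmetic easy to check, here is a minimal sketch in Python (the wattage figures are the ones quoted above; the variable names are purely illustrative):

```python
# Per-source power limits (watts) as described above.
PCIE_SLOT_W = 75    # motherboard PCIe slot
SIX_PIN_W = 75      # 6-pin PCIe power cable
EIGHT_PIN_W = 150   # 8-pin PCIe power cable

# Official board power quoted above.
GTX580_W = 244
C2050_W = 238

demand = 2 * GTX580_W + C2050_W                             # 726 W
supply = 2 * EIGHT_PIN_W + 2 * SIX_PIN_W + 3 * PCIE_SLOT_W  # 675 W

# Suggested cabling: both 6-pin cables to the C2050,
# one 8-pin cable to each GTX 580, plus 75 W from each slot.
per_card = {
    "C2050": PCIE_SLOT_W + 2 * SIX_PIN_W,     # 225 W
    "GTX 580 #1": PCIE_SLOT_W + EIGHT_PIN_W,  # 225 W
    "GTX 580 #2": PCIE_SLOT_W + EIGHT_PIN_W,  # 225 W
}

print(f"demand {demand} W vs. supply {supply} W")  # 726 vs. 675
print(per_card)
```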

Man, the main selling point of the Tesla GPUs is their supposedly higher reliability. If you invest the money for a Tesla, why risk that with an undersized power supply? I hope you don’t want to do CUDA development on this machine. The time you may waste hunting down spurious “bugs” can easily outweigh the money saved.

Thank you, tera. That’s very helpful. I checked the power supply; it has a 1200W capacity. From your explanation, it seems that even though it has 1200W of capacity, each 6-pin can only pass 75W and each 8-pin can only pass 150W from the power supply, no matter what. Is my understanding correct now?

I have a follow-on question now. I remember seeing a benchmark test saying that a GTX 480 takes something like 450W at peak. How could that be possible if, according to your explanation, each card cannot get more than 300W anyway?

If that’s the case, I think I have to get an add-on power supply on top of the current one. I know such power supplies are available on the market. Will this work?

Thank you,

That is how it works. If you have some regular 4-pin Molex connectors, there might be the possibility of getting some more 12V power via a cable like this one.

450W will be for the whole system, not the GPU. The PCIe standard really limits an 8-pin + 6-pin device to 300W (75W from the slot, 150W from the 8-pin, 75W from the 6-pin).

That’s a surprising PSU that gives 1200 watts yet is so stingy with PCIE power…

I hate to admit it, but I have used power splitters myself before to get GPUs working in older machines.
I usually double-check actual power use with a Kill A Watt meter. That won’t give you any info about power drawn per PCIe cable, but it gives you an idea of overall wattage.

A lucky fact is that CUDA apps tend to use only about 2/3 of the wattage that graphics apps do, which gives you some very nice leeway. This is true even for intense 24/7, 100% GPU compute loads.
Power use is much higher when the rasterizers are working hard… peak GPU wattage always shows up in torture tests like FurMark, which continuously fire as many polygons into the rasterizers as possible.
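If SPWorley’s rough two-thirds figure held for these cards, the back-of-the-envelope estimate would look like this (an estimate based on that rule of thumb only, not a measured value):

```python
# Official board power (graphics workloads) quoted earlier in the thread.
GTX580_W = 244
C2050_W = 238

CUDA_FRACTION = 2 / 3  # SPWorley's rough rule of thumb, not a specification

cuda_estimate = CUDA_FRACTION * (2 * GTX580_W + C2050_W)
print(f"~{cuda_estimate:.0f} W estimated for all three GPUs under CUDA load")
# ~484 W, which would fit within the ~675 W connector budget discussed above
```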

Thank you for the answers. I have a final dumb question: if I use an 8-pin splitter on an 8-pin outlet, will each branch of the splitter only get 75W, or will it get 150W?

It seems that my current PSU is enough to drive 3 GPUs (but not 4), I just don’t have a good way to get the power to the GPUs. Perhaps I should try whether that Molex adaptor works.

This is hard to tell. In reality, it is not that a certain power rail will provide exactly its maximum rated power but never more. It is more that the quality of the power slowly degrades the more power is drawn, and a line is drawn somewhere where the quality is still considered acceptable. Because in CUDA mode every bit matters, and a single flipped bit can lead to a crash or to completely wrong results, you want better-quality power than in graphics mode. So common advice is to not even come close to the rated maximum of the power supply (although SPWorley is also right that when running CUDA there is some extra headroom, as the rasterizers are idle).

To come back to your original question, it’s often the contacts themselves that pose the problem, rather than the power supply behind them. So I’d assume that behind the splitter cable you get closer to 2×75W than 2×150W. And because contacts are so problematic, I would not want to introduce unnecessary extra contacts by using a splitter cable.
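Put into numbers, assuming tera’s guess that each branch of the splitter behaves more like a 75W feed than a 150W one:

```python
PCIE_SLOT_W = 75
SPLITTER_BRANCH_W = 75   # assumed, not a specification
GTX580_W = 244           # official board power

available = PCIE_SLOT_W + SPLITTER_BRANCH_W  # ~150 W per card
print(f"~{available} W available vs. {GTX580_W} W official board power")
```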

I’d also not feel comfortable with an extra external power supply, particularly as your power supply seems to be capable of supplying the required power.

I’d recommend going with Avidday’s solution: using the existing PCIe power cables without a splitter to get close to the specified power, and then providing some extra power through adaptors from Molex connectors.

Yes.