I bought from here:
Tried it and that one is 45.86. All good. I’ve got the QSFP56 NADDOD cable, by the way.
I’ve got one from ExxactCorp - the same PNY one that Microcenter used to sell when the Sparks were just released.
I got one last night; the build quality is terrible compared to the HP ones.
Well, I bought one at first… :) Made another 2 hour roundtrip to the nearest Microcenter two weeks later :)
Which one did you get? It’s not clear.
And when I bought my first at Microcenter, they were limiting people to 1 unit. I bought my second at Nvidia’s GTC in Washington, but there were no cables there.
The Microcenter one. I also have 3 HP-made ones (all-metal cage and good fabric-knit cable) and two NADDOD ones.
BTW, the one Microcenter sells now is different from the one they used to sell. I’ve got Amphenol NJAAKR-0006 Cable Assembly from ExxactCorp (which is the same cable that Microcenter was selling under PNY brand, it seems like), and the quality is just fine.
Just to be clear: you said you got one on Dec 19 with terrible build quality, and that must be the Microcenter one. So the HP and NADDOD ones are OK?
Interesting that you’re a collector of CX7 cables.
Yes. And I am running an 8x Spark cluster.
I’m sure lots of people would be interested in the details of your setup, or at least in what you’ve determined about its capabilities and performance. Are you using it mainly for fine-tuning or RAG? If it’s for inference, I wonder whether some Apple Studio Ultras using the new RDMA and Exo software over TB5 would be faster.
The Macs are roughly the same speed, but with a slower TTFT (at least in the case of Kimi). I do that and many more things; it’s a research cluster (for the sake of cluster research), mostly for pushing the limits of what these machines can do and sharing knowledge with teams about the various scales and capabilities of this system and other workstation-class AI systems.
Would you also please share your exact nvidia-smi output?