Tesla K80 in External Chassis

I am not sure if I am in the correct forum.

I currently run a Lenovo P900 workstation with dual E5-2687W v3 CPUs, 256 GB of memory, and a Tesla K40. I run 3D EM simulations (CST). I’d like to move up to a K80.

Do I pay a performance price for running an external chassis?
Are some better than others?

I see that NVIDIA has preferred vendors out there, but I find little discussion online about how these boxes perform with respect to cooling, power, and especially performance.


I would definitely suggest only using K80 in a supported platform. If you search the CUDA installation and setup forum, you’ll find plenty of examples of the difficulty people have when installing K80 in an unsupported platform. The vendor should specifically state that the platform is certified to support K80, and you should order K80 already installed in the platform by the vendor.

Tnx txbob,

I’ve read multiple nightmare stories from people installing K80s in unsupported platforms. Hence my paranoia.

I plan to order only from companies on NVIDIA’s approved list (regardless of cost). My question is about performance: will I take a performance hit due to the overhead of the external chassis?

Definitely appreciate the reply.


Performance specifics would depend on the exact chassis. Normally, cable-driven external chassis employ a PCIE bridge chip at each end of the cable link - one plugs into your workstation, the other plugs into the backplane in the external chassis. These PCIE bridge chips introduce some additional latency but have no net effect on sustained transfer rates. In a system that uses the connection method I described, you should still be able to measure (approximately) the same host-to-device bandwidth as with an internal slot. Although it’s no longer conveniently available, the Dell C410x was such a chassis and behaved as I describe. Another comparable setup, also no longer conveniently available, was the NVIDIA Quadroplex platform.
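As a sanity check once the card is installed, you can measure host-to-device bandwidth yourself and compare against the same measurement taken with a GPU in an internal slot. A minimal sketch along these lines (timing a pinned-memory cudaMemcpy with CUDA events - a simplified stand-in for the bandwidthTest utility shipped with the CUDA samples, not a replacement for it):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256 << 20;  // 256 MiB transfer
    void *h_buf, *d_buf;

    // Pinned (page-locked) host memory gives the best sustained PCIe rates
    cudaMallocHost(&h_buf, bytes);
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Warm-up copy, then a timed host-to-device copy
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // bytes / ms gives B per ms; dividing by 1e6 converts to GB/s
    printf("H2D bandwidth: %.2f GB/s\n", bytes / ms / 1e6);

    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}
```

If the number you get through the external chassis roughly matches the internal-slot figure, the bridge chips are adding latency but not throttling sustained transfers, consistent with the behavior described above.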