Oh no, it's still true. What was happening is that you would get <2 GB/s out of each board instead of the 3.1 GB/s you get now. The link is still x8, though.
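For context on why those numbers line up with an x8 link: a rough sketch of the theoretical per-direction bandwidth, assuming PCIe 2.0 signaling (5 GT/s per lane with 8b/10b encoding), which is my assumption for this era of hardware and is not stated in the thread:

```python
# Assumed PCIe 2.0 link parameters (not from the thread itself):
# 5 GT/s per lane, 8b/10b line coding -> 500 MB/s per lane per direction.
GT_PER_S = 5e9
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b: 8 data bits per 10 transferred bits
BYTES_PER_LANE = GT_PER_S * ENCODING_EFFICIENCY / 8  # 500e6 bytes/s

def theoretical_gbs(lanes: int) -> float:
    """Raw per-direction bandwidth in GB/s for a link of the given width."""
    return lanes * BYTES_PER_LANE / 1e9

print(theoretical_gbs(8))   # 4.0 GB/s raw for an x8 link
print(theoretical_gbs(16))  # 8.0 GB/s raw for an x16 link
```

On that assumption, the 3.1 GB/s figure is roughly 78% of the 4 GB/s raw x8 ceiling, which is plausible once packet and protocol overhead are accounted for, whereas <2 GB/s would sit well below it.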