16 GB of DDR2 recognized as 8 GB on Foxconn Destroyer?

I am building a computer with the Foxconn Destroyer motherboard, so that it will have the capacity for 4 Tesla C1060 boards.

According to the NVIDIA site:
[url=“High Performance Supercomputing | NVIDIA Data Center GPUs”]Page Not Found | NVIDIA[/url]

they were able to get it working with 16 GB (4 × 4 GB) of DDR2 DIMMs from G.Skill.

Has anyone been able to replicate that and how?

I am using Kingston KVR800D2N6/4G 4 GB DDR2-800 modules, and BIOS version P13 recognizes only 8 GB.

Would appreciate help on how to fix this.

Thank you!

Just a thought - I don’t have anything really concrete to stand on -

The rank of the memory modules can make a difference. Quad-rank is cheapest, then dual-rank, then single-rank. Intuitively, the memory controller can only drive a certain maximum number of total ranks added together. Maybe your motherboard can only drive eight ranks, and the modules you've got are quad-rank, so it can only drive two modules total. Check it out: look up your DIMM model number on the Kingston site to see what rank it is, and try to find out what rank the G.Skill modules are. This could be the problem.

When I had 16 GB in, the BIOS would only recognize 4 GB. Boot into Linux and type free, and yes, you've got 16 GB. Allocate 14 GB of pinned memory, and that works too, so…

The motherboard manual for the Destroyer only claims that it supports 8 GB. If nothing else, you should probably warn people on the build-your-own-page that 16 GB of memory is not officially supported in this motherboard.

What brand of memory was that? Do you know the exact model of that memory?

What do you mean by “allocate 14 GB of pinned memory”?

I don’t know if this has anything to do with it, but when I built my last personal-use computer, the Intel motherboard I used supported up to 8 GB of RAM, but only if it was DDR 533 or 667; if I used DDR 800 (I did), then it only supported up to 4 GB. I don’t think this was ever fixed either, meaning it may just be a limitation of that particular board.

I wonder if the Foxconn has the same issue… it should say in the motherboard manual.

It was G.Skill memory (who do you think is responsible for that page).

Pinned = page-locked, so if you’ve got 14 GB of page-locked memory allocated, you have at least 14 GB of physical memory.

Which model G.Skill? Was it F2-6400CL6Q-16GBMQ?

Could you explain how you did the page-locking of 14 GB memory, so that I can try it as well?

Repeated calls to cudaMallocHost, or in CUDA 2.2 you can just try page-locking much more than four gigs at a time.
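For anyone who wants to try the same check: here is a minimal sketch of the repeated-cudaMallocHost approach. The 1 GiB chunk size and 16-chunk cap are my own arbitrary choices for the test, not anything from the thread. The program keeps page-locking chunks until allocation fails, which tells you roughly how much physical memory is really usable.

```cuda
// Sketch: page-lock host memory in 1 GiB chunks via cudaMallocHost
// until allocation fails, then report the total that succeeded.
// Chunk size and the 16-chunk cap are arbitrary test parameters.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t chunk = 1ULL << 30;    // 1 GiB per allocation
    void *ptrs[16];
    int n = 0;
    for (; n < 16; ++n) {
        if (cudaMallocHost(&ptrs[n], chunk) != cudaSuccess)
            break;                      // ran out of page-lockable memory
    }
    printf("Page-locked %d GiB total\n", n);
    for (int i = 0; i < n; ++i)         // release everything we locked
        cudaFreeHost(ptrs[i]);
    return 0;
}
```

If this reports close to your installed RAM (minus what the OS is using), the full amount is genuinely addressable, whatever the BIOS summary screen says.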

I can’t check on the model of the memory until Monday.

Are you one of the people at NVIDIA who actually built the system with a Foxconn Destroyer motherboard and 16 GB of G.Skill memory?

And although there were 16 GB (4 x 4GB) plugged in, the BIOS was reporting only 4 GB?

yes. I’m that guy.

Then, could you also make available a program that people can use to check the actually available memory, as per your explanations in the previous posts?

You should really include it as part of the steps to verify your system in the instructions for building your own system.

I just upgraded the BIOS to P14 via a CD using the procedure from
[url=“http://www.biosflash.com/e/bios-boot-cd.htm”]BIOS Boot-CD - Howto: BIOS-Update per bootable CD | www.biosflash.com[/url]

With the new BIOS P14, the 16 GB of Kingston memory are recognized correctly as 16 GB by the BIOS!

Run free from Linux? It’s not that complicated.
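The same figure `free` reports as "total" can also be read straight from the kernel; this one-liner (my formatting, not from the thread) prints the total physical RAM the OS sees, in GiB:

```shell
# Print total physical RAM in GiB, read from /proc/meminfo
# (this is the same number `free` reports in its "total" column)
awk '/^MemTotal:/ { printf "%.1f GiB\n", $2 / 1048576 }' /proc/meminfo
```

If this prints ~16 while the BIOS screen says 8, the OS is seeing all of the memory and the BIOS summary is just wrong.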

PS: P14 fixes the bug where having all four PCIe slots filled reduces the peak bandwidth to each card by 40% or so; you should definitely be running that BIOS anyway. (I didn’t know it was officially out yet.)

BTW, tmurray, have you tested the Dell Precision T7400 with 2 Tesla C1060 cards yet?

If so, what card did you use to drive the monitor?

Can one use a PCI card with an NVIDIA GPU connected to one of the PCI-X slots that will remain free?

I did it once… I think I used an NVS 285 or something in the PCIe x1 slot. I seem to recall that I couldn’t get a PCI card to work, but I don’t remember.

It is a hassle because you can only drive one card off the rail that has the two PCIe 6-pin connectors, so you need to take the bizarro 10-pin connector, jam a 6-pin-to-8-pin adapter in where the keying lines up, and then power the second card. That works fine. (Major props to my coworker for figuring that one out…)

Could you give more details?

Which 2 PCIe 6-pin connectors?

What bizarro 10-pin connector, and how exactly did you use it?

BTW, you should really give all these details on the NVIDIA web page that lists all the systems that you have tried the C1060 on.

T7400 only has two PCIe 6-pin connectors coming directly from the PSU. I wouldn’t call 2xC1060 on T7400 “supported” unless you get the Dell 10-pin to PCIe 8-pin connector, but it does work. There’s one proprietary ten-pin connector in the T7400 where six of the pins are keyed to a PCIe 6-pin connector, and there is sufficient power to drive 2xC1060s. If you put a 6-to-8 pin adapter on that connector, you can power the second C1060 while the two 6-pins power the first. It should be very straightforward if you have the connector, but I didn’t (apparently whether or not you get it depends on your configuration from Dell).

The build your own page http://www.nvidia.com/object/tesla_build_your_own.html says

Does that mean the line above is no longer true for this specific board?