GTX480 to C2050 hack or unlocking TCC-mode on GeForce

It is not possible to bypass the EEPROM on the GTX 580 series with the stock cooler in place; the base of the shroud covers the ROM entirely. It is also not wise to power up the card without the base of the shroud, because it acts as a heatsink for the VRMs and RAM chips.

Apparently I was incorrect. I took a GTX 470 that I was about to RMA (as far as I can tell, just a display-connector issue), shorted CE# (Pin 1) and Vss (Pin 4) on the ROM with the shroud off and a big fan blowing on the card, and was able to completely bypass the ROM.

NVFlash recognized it as PCI Dev 0000.

The RAM chips didn't even get warm, but the heatsink and the VRM were quite hot to the touch. Probably not a good idea to leave it that way overnight, but a couple of minutes is apparently fine.

Maybe good news for you, hamster143, if you haven't RMAed the card already.

Nice! By the way, you might only need to hotwire CE# (with Vss/GND) until after boot, when the expansion ROM code would normally have been loaded, as I suspect nvflash won't be able to flash a firmware while the EEPROM is still bypassed.

Interesting. But, if I understand correctly, that would involve breaking some seals and taking off the shroud and possibly the cooler, and that would greatly complicate my chances of RMA’ing the card (it had other problems aside from the PCI ID; I suspect a fried RAM chip). Since it’s still under warranty, I’ll let experts take a crack at it first. If they refuse to honor the warranty, then I’ll try this.

Correct, and I forgot to mention that important point. Obviously you cannot flash a ROM that has its Chip Enable/Select disabled. In my particular case, Vss and CE# were basically pointing towards the SLI connectors, so all I did was touch a wire between the two pins while powering on my computer.

Interesting; of all the video cards I've owned, none of them had seals. In fact, some companies don't even care if you remove their heatsinks and put on your own aftermarket modifications (see EVGA, for example). Who manufactures your card?

Galaxy. I’m not 100% sure that there are physical seals, but I remember seeing stickers on the shroud saying “don’t remove or the warranty is void”, and there’s a long list of physical defects on their web site that make the card unreturnable.

I think I’ll be doing a little more research on the firmwares and the straps. Surely the double precision performance lock must be in there as well. And did somebody mention the ECC checking in memory was done in software? Not sure about that though. Also, did you folks notice the second copy engine?

Maybe ECC isn't completely done in software. Traditionally, ECC on RAM is done via a ninth chip, which stores all the parity data. In NVIDIA's implementation, though, they seem to be using the same (10) chips and distributing the parity bits in some manner. When I enable ECC support in the drivers for the Quadro at work, I lose some amount of memory and bandwidth, so I believe the functionality is there in the hardware and is just being disabled by the ROM.
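
As a rough sanity check (assuming the roughly 1/8 overhead that matches the published C2050 numbers, 3 GB total vs. 2.625 GB usable with ECC on), a quick back-of-the-envelope in Python:

    # Back-of-the-envelope: usable memory once ECC check bits are carved out
    # of the framebuffer. The 1/8 fraction is an assumption based on the
    # published C2050 figures (3 GB total, 2.625 GB usable with ECC on).
    ECC_FRACTION = 1.0 / 8.0

    def usable_with_ecc(total_gb):
        """Usable framebuffer (GB) with ECC enabled, under the 1/8 assumption."""
        return total_gb * (1.0 - ECC_FRACTION)

    for total in (3.0, 6.0):
        print(f"{total:.0f} GB card -> ~{usable_with_ecc(total):.3f} GB with ECC on")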

I was unable to convert my GTX 580 to a C2050. There were several bits that completely prevented my card from being recognized, and after a few of those I became afraid of accidentally flipping a bit that might physically damage the card.

This is certainly an interesting topic! I have a GTX 580 3GB card that I would LOVE to have TCC enabled for… VNC is so ridiculously slow for remote login; it would be awesome to be able to use RDC. I also have a GTX 460 2GB and a GTX 460 768 MB… Would either of the GTX 460s work as a 'test' for this experiment? I'd like to see if I can get one of those working with TCC before I mess around with the other (very expensive) video card.

As far as I know, all Fermi-based Quadro/Tesla cards are GF100 chips. The GTX 46x cards are GF104, which means they are probably not compatible with most of the features of the professional cards.

Interestingly enough, I was able to take the ROM from the Quadro 6000 at work and flash it onto my GTX580 but it resulted in all sorts of screen corruption as well as having an unknown PCI Device ID (the exact ID eludes me at the moment).

You were pretty brave doing a crossflash to such a different card; thankfully you didn't brick it, haha. By that logic, would that mean that trying the TCC hack with my GTX 580 would be fruitless since it is a GF110 chip? Sucks that GTX 480 cards are still at least $275 USD!

Not necessarily. I've been reading that NVIDIA removed about 300 million transistors from GF100 to make GF110, but I haven't seen the source of this "rumor" yet.

Here’s confirmation of that rumor, or at least I think it’s confirmation: Nvidia's GeForce GTX 580 graphics processor - The Tech Report

That article also states that the GF110 should have the same ECC and DP support as GF100, so TCC might work… I will give this a shot now that I have a spare graphics card to use as my primary. It would be really nice to be able to remote desktop into my computer to do GPU simulations!

I’ve gone ahead and backed up my current GTX 580 firmware. I have attached it here also in case others want to take a look.
The interesting part is that nvflash identifies the card as a 'GF100B'; googling that didn't turn up anything really new, but now I'm optimistic that this card is nothing but a faster, locked-down Tesla C2050.

Note: the attachment is not actually a text file; I just had to rename the file extension.
Also attached is the nvflash output from reading the flash ROM.

The firmware modification is the easy part, but the softstraps going from the GTX 580 PCI ID to Tesla C2050 are another matter:

GTX580(1080): 0001 0000 1000 0000

C2050 (06D1): 0000 0110 1101 0001
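
Quick sanity check of the bit arithmetic in Python (the two values are just the device IDs above):

    # Which bits differ between the GTX 580 and Tesla C2050 PCI device IDs,
    # i.e. which softstrap-controlled bits would have to change.
    current = 0x1080  # GTX 580
    target = 0x06D1   # Tesla C2050

    diff = current ^ target
    print(f"GTX580: {current:016b}")
    print(f"C2050 : {target:016b}")
    print("bits that differ    :", [b for b in range(16) if (diff >> b) & 1])
    print("bits to set (0->1)  :", [b for b in range(16) if ((target & ~current) >> b) & 1])
    print("bits to clear (1->0):", [b for b in range(16) if ((current & ~target) >> b) & 1])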

So I'd have to change bits 0, 4, 6, 9, 10, and 12. Based on the pattern the OP posted, however, I'm not totally sure how it continues; perhaps it is:

straps 0:

          -??+xxxx x??????? ??++++xx xxxxxxxx
           ^^^      ^^^^^^^ ^^^^^^
           |||      ||||||| ||||||-pci dev id[0]
           |||      ||||||| |||||--pci dev id[1]
           |||      ||||||| ||||---pci dev id[2]
           |||      ||||||| |||----pci dev id[3]
           |||      ||||||| ||-----pci dev id[5]?
           |||      ||||||| |------pci dev id[6]?
           |||      |||||||
           |||      |||||||--------pci dev id[7]?
           |||      ||||||---------pci dev id[8]?
           |||      |||||----------pci dev id[10]?
           |||      ||||-----------pci dev id[11]?
           |||      |||------------pci dev id[12]?
           |||      ||-------------pci dev id[13]?
           |||      |--------------pci dev id[15]?
           |||
           |||---------------------pci dev id[4]
           ||----------------------pci dev id[9]?
           |-----------------------pci dev id[14]?

- cannot be set, always 0
+ already known
? I would think these control the rest of the pci dev[n] markers, if I follow the pattern, but I have no idea.

Has anyone mapped this sequence more thoroughly? I'm tempted to flash the modified firmware, flip the OR0 softstrap bits on the '?' marks accordingly, and see if I can get the correct PCI ID to stick (or at least go one bit at a time and try to map the rest of the bits I need), but I'm slightly worried since this is a $700 video card…

Any particular ideas as to how to accomplish this in a (relatively) safe manner are welcome!

Edit: The pattern above is purely a guess. I've never dealt with firmware before, so it may not make sense at all given how the firmware is coded and how it expects these bits to be arranged.
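
If I do go the one-bit-at-a-time route, something along these lines is roughly what I have in mind. This is only a sketch: the strap dword offset and the file names are placeholders I made up, and the real location and byte order would have to come from the actual strap layout in the image.

    # Sketch: flip a single bit of the OR0 softstrap dword in a COPY of the
    # ROM dump, leaving the original backup untouched.
    # STRAP_OR0_OFFSET and the file names are hypothetical placeholders.
    import struct

    STRAP_OR0_OFFSET = 0x00   # placeholder -- NOT the real offset in the image
    BIT_TO_FLIP = 0           # the single pci dev id bit to test this round

    with open("gtx580_backup.rom", "rb") as f:
        rom = bytearray(f.read())

    (strap,) = struct.unpack_from("<I", rom, STRAP_OR0_OFFSET)  # assumes a little-endian dword
    print(f"old OR0 value: 0x{strap:08X}")

    strap ^= 1 << BIT_TO_FLIP
    struct.pack_into("<I", rom, STRAP_OR0_OFFSET, strap)
    print(f"new OR0 value: 0x{strap:08X}")

    with open("gtx580_test.rom", "wb") as f:
        f.write(rom)

And presumably nvflash would still have to be convinced to accept a hand-edited image (checksums and so on), which I haven't looked into yet.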

I did not see where The Tech Report mentioned that ~300 million transistors were removed.

I would be quite surprised, though; that would mean functionality was removed and that NVIDIA never intends to release a GF110/GF100B-based Quadro. In fact, NVIDIA added functionality, namely faster FP16 filtering and improved Z-culling.

GF100B was likely the internal codename used at NVIDIA until they released it and officially named it GF110. They added new functionality, which means it wasn't just the silicon respin/bugfix it was probably originally intended to be.

I would be careful; some of those bits you listed as '?' actually bricked my card, and I had to do the ROM bypass procedure I mentioned earlier in the thread.

That's what I was afraid of. I found a post on another forum about recovering from a bad flash. The part that confuses me is this: "I didn't feel like completely unsoldering and swapping EEPROM chip this time, so I merely disconnected its power supply. As before, got a bootable system, connected EEPROM power once the system was up, and flashed the original ROM image."

Granted that quote was about a GeForce 7950 GT AGP, but… is that essentially what you did with your EEPROM chip (shorting the CE# and Vss pins at boot-up)? I don't feel particularly inclined to pretty much disassemble the card just to recover from bad softstraps, so I might hold off on this. The 'disconnecting power supply' part seems to be a bit of a misnomer if what you're really doing is shorting the power… it makes it sound simpler/easier, haha.

This guy actually went in and physically disconnected the Vcc trace on the PCB, and probably hooked it up to some kind of switch. Vcc provides power for the chip's logic, so in effect, by opening the switch and not applying power to the chip, the chip did not exist at boot-up until he closed the switch and reapplied power (presumably once he got to a command prompt).

The procedure I use for ROM recovery is much simpler and does not require you to modify your PCB. If you read the datasheet for my ROM chip, you will find that there is a #CE (Chip Enable) pin. The # (sometimes you will also see a bar over CE, which means the same thing) denotes that the pin is active-low: it is driven low (towards Vss/GND) to select the chip and sits high (towards Vcc) when the chip is deselected. What I do is hold #CE shorted to Vss while the card is first booting; with the pin forced like that, the GPU cannot carry out a normal read of the ROM's contents, so the chip effectively acts like it is not there. The GPU is designed to work without this ROM in a sort of "safe" mode. As soon as you see a screen, it is probably safe to remove the short, as the GPU has stopped trying to ask the ROM for its data. It is important to remove it, because you want nvflash to recognize that a ROM chip is present, and it cannot do that while the ROM is still being bypassed.

All you really need to do is remove the shroud and reattach just the heatsink proper to the GPU to protect the big chip. The Vss and #CE pins are oriented (at least on my card) so that they can be reached while the card is still in its slot, so all I had to do was touch a wire across them during power-up. If this is not the case with other cards, you will need to solder in some wires and a switch.

*A slightly off-topic note: it is common practice in IC design to have a #CE pin because you can have multiple chips on the same data lines. You then pull #CE low only for the device you are trying to access. The other chips, which have their #CE held high, act like they do not exist on those data lines, basically the same as if you had removed power from them.

**Disclaimer: As usual, I am not liable if you do something to blow your card up. In particular, static electricity is a big killer when messing with chip-level logic like this, as is applying too much voltage to one of the pins (for example, accidentally touching the 12 V from the power supply input to one of them). Also, do not try running any 3D graphics at all with your shroud off; you will more than likely overheat the RAM/VRMs and permanently damage the card.

Seems like NVIDIA came out with a bunch of new cards, and based on their PCI device IDs they look to be GF110-based. I'm going to be hacking tonight :)

http://us.download.nvidia.com/XFree86/FreeBSD-x86/270.41.06/README/supportedchips.html

Tesla M2090 0x1091
Tesla M2075 0x1094
Tesla C2075 0x1096

Good news: I was able to modify the straps so that my device shows up as 1091 (M2090). However, as there are no Windows drivers for the new cards yet, I cannot verify whether anything has changed. Modifying the INF file for older drivers shows that nothing has really changed (and I did not get the two DMA engines that ijsfz got with the C2050 conversion).