It’s a system incompatibility. Tesla cards (with a few historical exceptions, e.g. C2075, K20c, K40c) are designed to be purchased and installed (only) in an OEM server system certified for their use. HP does not certify any of their workstations for any current Tesla cards, nor were any ever certified for K40m usage. If you buy a Tesla card believing you can install it in any system you want, you are asking for trouble. It is simply not possible, in the general case, and there is no design intent to make it possible.
There is no documentation to support this configuration. There is no OEM system lock or card lock involved. The resources in question are PCI BAR regions that the system BIOS must assign during PCI plug-and-play enumeration. The K40m requires a large complement of these memory resources (large BARs). A system whose BIOS cannot or will not assign them leaves the card non-functional. There is nothing you can do to fix this, apart from changing any user-accessible BIOS settings that affect resource assignment (for example an "Above 4G Decoding" or large-MMIO option, if your BIOS exposes one).
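On Linux, you can at least see whether the BIOS managed to assign the GPU's BARs. The sketch below is illustrative only: the bus address and BAR size in the sample log are made up, not taken from a real K40m/HP system, but the kernel message format shown is the kind of thing to look for.

```shell
# On real hardware you would inspect the live system, e.g.:
#   sudo dmesg | grep -i 'BAR'
#   sudo lspci -vv -d 10de:    # 10de = NVIDIA vendor ID; look for unassigned/disabled Regions
#
# Sample (fabricated values) of the kernel messages that indicate the BIOS
# could not assign a large BAR region:
cat > /tmp/dmesg_sample.txt <<'EOF'
pci 0000:04:00.0: BAR 1: no space for [mem size 0x400000000 64bit pref]
pci 0000:04:00.0: BAR 1: failed to assign [mem size 0x400000000 64bit pref]
EOF

# This is the symptom of the incompatibility described above:
grep -i 'failed to assign' /tmp/dmesg_sample.txt
```

If messages like these appear for the GPU, the card will not be usable in that system regardless of driver installation.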
A server designed to support this card has, of course, taken these requirements into account, including in the system BIOS. It's not a "lock" of any sort. In most cases, a PCIe Tesla GPU can easily enough be removed from a supported HP server configuration and placed in a supported Dell server configuration (just to pick a random pair as an example), with the full expectation that it will work normally.
But your workstation is not a supported configuration for that GPU. There are many statements like this on these forums.
Tesla K40 is an obsolete product. For non-obsolete products, you can find supported server configurations here:
[url]https://www.nvidia.com/en-us/data-center/tesla/tesla-qualified-servers-catalog/[/url]
There is no suggestion anywhere that Tesla cards can be placed in any system you want with an expectation of proper behavior.