Problem: I have developed a multi-GPU threaded application that uses the CUBLAS 5.0 library. This application has been tested on many different cards, including the 430, 580, 680, 690, M2090, K10, K20, and K20x. The application makes heavy use of the CUBLAS library, in particular calls to cublasCgemm. I late-bind with the driver at run time and make the cublasCgemm calls that way. I use MKL OpenMP threading to launch multiple CPU threads, one per GPU. This approach, and the application, has worked flawlessly on all of the hardware and software listed above.

I am now receiving [cudaErrorMapBufferObjectFailed = 14] from the cublasCgemm call on a CentOS 5.10 box with 2x GeForce GTX 780 cards (driver 319.60). The application runs fine if I use either 780 on its own, but if I try to use both, it crashes immediately as soon as the threads start hitting the 780s with cublasCgemm calls. I only see this error in the two-780 configuration described.

Can you explain [cudaErrorMapBufferObjectFailed = 14] in more detail and help me understand why it occurs in this one situation?
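For reference, here is a minimal sketch of the threading pattern described above (not the poster's actual code): one OpenMP thread per GPU, where each thread binds its device with cudaSetDevice before creating its own cuBLAS handle and issuing cublasCgemm. The matrix size and the direct use of omp_get_thread_num as the device index are illustrative assumptions.

```cpp
// Sketch only: one host thread per GPU, each with its own cuBLAS handle.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cuComplex.h>
#include <omp.h>
#include <cstdio>

int main() {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    const int n = 256;  // illustrative matrix dimension

    #pragma omp parallel num_threads(ndev)
    {
        int dev = omp_get_thread_num();
        cudaSetDevice(dev);            // bind this thread to one GPU

        cublasHandle_t handle;
        cublasCreate(&handle);         // handle is tied to the current device

        cuComplex *dA, *dB, *dC;
        size_t bytes = (size_t)n * n * sizeof(cuComplex);
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dC, bytes);

        cuComplex one  = make_cuComplex(1.0f, 0.0f);
        cuComplex zero = make_cuComplex(0.0f, 0.0f);
        cublasStatus_t st = cublasCgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                                        n, n, n,
                                        &one, dA, n, dB, n,
                                        &zero, dC, n);
        if (st != CUBLAS_STATUS_SUCCESS)
            std::printf("device %d: cublasCgemm failed, status %d\n", dev, st);

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        cublasDestroy(handle);
    }
    return 0;
}
```

If the failure reported above is specific to the two-GTX-780 case, a pattern like this narrows it down: the same code path succeeds on each card individually, so the per-thread device binding and handle creation are the first things worth double-checking.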
This newer posting likely sheds light on this report: