I've run into an unusual situation: my code became only partially functional after a driver update.
The thing is, I'm running a combination of a GTX 1080 Ti and a GTX 980 in the same machine (makes no sense for gaming, I know, but my CUDA project used both of them just fine and scaled as expected).
Today the new "Game Ready" driver arrived, and after the update my code started failing on the 980 somewhere down the line, during either an allocation or a cudaMemcpy (maybe even cudaSetDevice, I haven't checked). The result is that only my GTX 1080 Ti keeps working fine and the program can't use the machine's full potential.
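To narrow down which call actually fails on which GPU, I could run something like this minimal per-device smoke test (my own diagnostic sketch using only the standard CUDA runtime API, not part of my actual project):

```cpp
// Per-GPU smoke test: for every device, try
// cudaSetDevice -> cudaMalloc -> cudaMemcpy (both directions) -> cudaFree
// and report which call, if any, fails on which GPU.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Report a failing runtime call with its device index and error string.
#define CHECK(dev, call)                                            \
    do {                                                            \
        cudaError_t err = (call);                                   \
        if (err != cudaSuccess) {                                   \
            std::printf("device %d: %s failed: %s\n",               \
                        dev, #call, cudaGetErrorString(err));       \
            ok = false;                                             \
        }                                                           \
    } while (0)

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        bool ok = true;
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("device %d: %s\n", dev, prop.name);

        CHECK(dev, cudaSetDevice(dev));
        const size_t n = 1 << 20;  // 1 MiB test buffer
        std::vector<unsigned char> host(n, 0xAB), back(n, 0);
        void* d = nullptr;
        CHECK(dev, cudaMalloc(&d, n));
        CHECK(dev, cudaMemcpy(d, host.data(), n, cudaMemcpyHostToDevice));
        CHECK(dev, cudaMemcpy(back.data(), d, n, cudaMemcpyDeviceToHost));
        CHECK(dev, cudaFree(d));
        std::printf("device %d: %s\n", dev, ok ? "OK" : "FAILED");
    }
    return 0;
}
```

If the 980 fails right at cudaSetDevice, that would point at driver-level device initialization rather than anything in my allocation logic.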
I tried reinstalling the new driver a couple of times and it didn't help; then I just went ahead, found the old one, and downgraded, which returned everything to working condition.
Can you tell me what the cause might be? And if it's a driver bug or something like that, how do I report it?
The driver with the "problem" is 397.31, while the "working" one is 391.35 (and, generally, everything that came before it). And in case it matters, I'm still running Windows 7 (64-bit) on this particular machine.