There are at least two general reasons, and perhaps a third:
CUDA relies on fairly tight integration with the host compiler. For example, data-structure alignment in memory must match for every possible/conceivable arrangement of data types and qualifiers, and object-file formats must be fully compatible. So there is real technical effort involved in creating the interface in the first place. If you want to see all the moving parts in the CUDA compilation process, run nvcc with the --verbose switch.
Any such interface must be validated, and there is a QA effort/cost involved. Each additional configuration adds a dimension to the QA matrix, so the cost of adding one major new dimension (such as a newly supported compiler) can be considerable in terms of the effort needed to validate the interface.
The premier compiler on Windows is the one provided by the platform vendor, namely Microsoft. Large commercial codebases get developed with it, so it will be the first choice of supported compilers on Windows. To support other compilers, there would have to be significant market demand, as evidenced by actual usage in large industrial/commercial/enterprise codebases.
The situation is not likely to change in the near future. If there were enough evidence of need and demand, I’m sure NVIDIA would respond; however, a constant theme internally is to look for ways to reduce the support matrix, not increase it, so that limited resources can be focused where they will provide the most benefit.
There are no technical issues that absolutely prevent the support of additional compilers. But there are definitely resource limitations, and supporting a new compiler on Windows would require a significant expenditure of resources and effort. The fact that such support does not exist today suggests that, so far, the cost/benefit case has not been compelling.
My comments above apply to the NVIDIA-provided toolchain, of course. Other toolchains that target CUDA GPUs do exist; my comments don’t apply to those toolchains or efforts.