Just installed XP so I can dual boot with Vista, and make use of the profiler counters under XP. As an aside, can anyone explain the units for counters like local load and gst coalesced, and how to work out what percentage of loads/stores are coalesced? My GTX260 doesn't seem to have uncoalesced counters.
Anyway, back to the real problem. I've just tried running my raytracing program on XP - exactly the same build as on Vista - and it reports 30fps, whereas the same program on Vista was running at 15fps. The figures are correct: I can see the difference between 15fps and 30fps. My question: why is this happening, given that the exact same program is being run in each case? Any ideas? Surely the only possible cause is driver differences, but surely that wouldn't mean a 2x speedup or slowdown in my program.
Please help if you have any ideas, or tests I can run to track down the problem.
The overhead of a single driver call is much higher on Vista than on XP because of the extra work imposed by WDDM (the Windows Display Driver Model). If you're making 50,000 kernel calls per second, that adds up fast.
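One way to check whether launch overhead really explains the gap is to time many launches of an empty kernel on each OS and compare the per-launch cost. This is just an illustrative microbenchmark I'm sketching here, not something from the original program; the kernel name and launch count are my own choices.

```cuda
// Sketch: measure average kernel-launch overhead by timing many launches
// of a do-nothing kernel. Run the same binary on XP and Vista and compare.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void emptyKernel() {}  // does no work; timing isolates launch cost

int main() {
    const int N = 10000;  // number of launches to average over (arbitrary)

    // Warm up: the first launch includes one-time context creation cost.
    emptyKernel<<<1, 1>>>();
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < N; ++i)
        emptyKernel<<<1, 1>>>();
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Average per-launch overhead: %.2f us\n", 1000.0f * ms / N);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

If the per-launch figure on Vista is several times the XP figure, and your frame issues many kernels, that alone can account for a large fps difference; batching work into fewer, larger kernels would then help on Vista.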
I know XP is an OK OS, but isn't MS already on Win 7? And isn't NVIDIA working more closely with MS now? Everybody complains about how hard it is to develop for CUDA, and yet the current OS isn't really supported. NVIDIA, get on the phone and call Redmond!