So, now I’ve reinstalled the Mac system, booted into safe mode, tried another user account, reset the SMC, reset the PRAM, repaired permissions, and this is still happening. Performance on the Mac side is about 30% lower than on the Windows side, and also about 30% lower than another Mac of the same model running the same OS and software.
I don’t think it’s a hardware issue, because the hardware otherwise seems to be working fine, and since reinstalling the system didn’t help, it shouldn’t be a kernel extension issue either.
The only thing I can think of that I’ve done differently on this machine is that, under Windows, I installed the Nvidia System Tools to check my GPU clock. I ran the Find Optimal test under the GPU performance tab and it blue-screened during the test.
I didn’t overclock it though - I’m aware that can cause issues, so I didn’t move any of the sliders. If this Find Optimal test actually affects the firmware, it should make that clear, because that wasn’t my intention in using it; I just wanted to see what the optimal clock speed was, since I know Apple underclocks these chips. When I go back into the tab, it lists the clocks as set at factory defaults: 350MHz core and 800MHz shader (lower than the 9400M(G) spec of 450MHz core, 1100MHz shader). Nothing appears to have changed, so I don’t know what damage it may have done.
One game on the Windows side did come up with a message saying the hardware seems to have changed. I’ve also found console messages on the Mac side saying "IGPU: family specific matching fails", but I don’t know if that’s normal. The GPU device is recognised OK, and the OpenGL driver monitor app can still monitor it.
Is it possible that this Find Optimal test has somehow affected the Mac firmware for the GPU only - maybe the EFI settings rather than the Windows BIOS? If so, how do I go about fixing this, or at least diagnosing whether the firmware was affected in any way? I guess I could try adjusting the performance slider in the Nvidia tool to see if it sets things back the way they should be, but I really didn’t want to touch this stuff in case it makes the machine unbootable altogether.
Edit: it seems that Find Optimal actually changes the clock speed of the GPU. I suppose it’s my fault for clicking past the agreements and pressing the button, but there wasn’t a warning that it would actually modify my clock speeds - I figured Find Optimal was going to check the model of my GPU and look up the optimal clock in a database. Anyway, assuming the clock was changed from factory defaults, why is it the same speed in Windows?
Also, I loaded up ATI Tool and it shows 3 sliders: 2D is set at 150MHz, 3D min is 150MHz and 3D max is 450MHz. Can someone check if that’s what they’re supposed to be? I don’t mind changing the clock again if I can just get the performance back to factory default. It seems odd that the Nvidia tool says the clock rates are at factory defaults and yet the Mac side is performing over 30% slower - which is roughly the gap between 350MHz and 450MHz (450/350 is about 1.29, so dropping from 450 to 350 is a 22% loss, or a 29% gain going the other way).
Thing is, if I put the speed up by 30% so that the Mac side goes back to normal, surely the Windows side would go up by 30% too. It would make perfect sense to me if the Find Optimal test had stepped through various clock speeds and only reached a lower setting before it crashed, but only if both Mac and Windows were running at the same 30%-slower speed. It makes no sense for one side to run 30% slower than the other on the same hardware settings, especially when the Mac CUDA deviceDrv reports the same core clock.
Edit2: I see that someone has used the Nvidia Control Panel to change the clock speed on a MacBook OK -
They went all the way to 550MHz, 1200MHz. I don’t know how that affects the lifetime of the components, but if I’m experimenting a lot with CUDA the extra speed boost would be good - plus a 60-70% increase in games can mean the difference between playable and not. I ramp the fan up faster anyway, but is there a safe clock speed for this GPU? I was thinking about using 450MHz, 1000MHz. Which clock matters more for CUDA? CUDA 2.0 actually listed the shader clock in deviceDrv but 2.1 lists the core clock.
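For anyone who wants to see what the CUDA side reports without relying on the SDK samples, here’s a rough sketch of my own using the runtime API (this is not the deviceDrv sample itself, just a quick test program, and it assumes a working CUDA toolkit so it can be built with nvcc). As far as I know clockRate is reported in kHz, though which physical clock it maps to seems to vary by toolkit version, as above:

// clockcheck.cu - my own quick check, not an SDK sample.
// Build with: nvcc clockcheck.cu -o clockcheck
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA device found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // clockRate is in kHz; whether it corresponds to the core or the
        // shader clock seems to depend on the toolkit version.
        printf("Device %d: %s, clockRate = %.0f MHz, %d multiprocessors\n",
               i, prop.name, prop.clockRate / 1000.0, prop.multiProcessorCount);
    }
    return 0;
}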
I also noticed something interesting in the ATI Tool, which is the dynamic clock rate of the 9400M. It changes clock based on what it’s being used for. When you turn on the 3D test view, the clock rate is reported as 450MHz. How can that be if the Nvidia tool lists the clock at 350MHz? Surely it can’t dynamically go above what the Nvidia tool states. This dynamic switching apparently introduces some latency too. I don’t know if this is what’s happening, but the Cinebench benchmark stutters briefly at the start and then settles into smooth motion. The clock seems to ramp up in steps, 150 -> 350 -> 450 or something like that. I wouldn’t put it on high performance all the time myself, but it can be done - in ATI Tool you’d probably just set the lower 2 sliders higher up.
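If anyone wants to check whether the initial stutter really is the clock ramping up, a rough test like this (again just my own trivial kernel timed with CUDA events, not anything from the SDK) should show the first few launches running slower than the later ones if dynamic clocking is kicking in:

// rampcheck.cu - time the same trivial kernel repeatedly; if the GPU
// ramps its clock up in steps, the first iterations should be slower.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void busy(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = out[i];
        for (int k = 0; k < 1000; ++k)   // burn some cycles per thread
            v = v * 1.0001f + 0.5f;
        out[i] = v;
    }
}

int main(void)
{
    const int n = 1 << 20;
    float *d = NULL;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (int iter = 0; iter < 20; ++iter) {
        cudaEventRecord(start, 0);
        busy<<<(n + 255) / 256, 256>>>(d, n);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("iteration %2d: %.3f ms\n", iter, ms);
    }

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d);
    return 0;
}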