Changing memory clock rate and core clock rates for testing scalability

I saw in the following paper that they change the memory clock rate and the core clock rate on an NVIDIA GTX 280.
http://download.microsoft.com/download/2/d…20on%20GPUs.pdf

How can I change these parameters?

Thanks.

Your link leads to a 404 error (page not found), so it’s a little difficult to discern the context of your question.

But general advice on GPU overclocking can be had from just about any enthusiast forum. I don't know whether CUDA programmers are much into that sort of thing.

As for apps to accomplish this, try:

RivaTuner
eVGA Precision
nVidia System Tools with ESA support, specifically the Performance app
ATITool

These can not only overclock the card on a non-permanent basis but also set up profiles to be applied at boot-up.

On a more permanent basis, you can edit the BIOS and reflash it with new clocks using NiBiTor and NVFlash…

NVAPI allows you to change the clock rates from your own code.
http://developer.nvidia.com/object/nvapi.html

This is useful for making Shmoo plots, which can give you insight into whether your kernels are memory-bandwidth limited or not.

Of course, I haven't used this API myself; it's been on my TODO list!

Ooh, now we can create our own version of RivaTuner!

Thanks for that link, btw. Must’ve been too long away from the site…

http://download.microsoft.com/download/2/d…20on%20GPUs.pdf

This is the link to the paper. Could you please be more specific about what needs to be done here? Has anyone used the NVAPI before?

Could someone from NVIDIA provide more help on this please?

Never used NVAPI, no idea how it works, sorry.