OpenCL performance / temperature rising

Hello guys,

I have some doubts about OpenCL usage and temperature rising.

I created one application that renders using only DirectX and another that runs computations using only OpenCL. In my tests, the OpenCL application never reached temperatures as high as the DirectX application did, even though video card usage is reported as 100% for both applications.

Are there hardware differences on the video card that explain this? For example, are the compute units used by OpenCL unable to raise the temperature the way the DirectX rendering application does, due to hardware details? Or could it simply be that the OpenCL application is not heavy enough?

Are the hardware parts used by OpenCL designed to avoid high temperatures?

If the problem is that the OpenCL application is not heavy enough, which OpenCL operations/actions inside the kernel, or even between host and device, would be the heaviest for the video card? I already know that work-item/work-group sizes should be chosen so that the video card stays fully busy.
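As a point of reference for "heavy" kernel work: a long dependent chain of arithmetic with almost no memory traffic tends to keep the compute units saturated, which is what drives power draw and temperature up. This is only an illustrative sketch (the kernel name, iteration count, and constants are made up, not from any particular benchmark):

```c
// Hypothetical OpenCL C kernel meant to keep the ALUs busy.
// A dependent chain of fused multiply-adds does almost no memory
// traffic, so the compute units work instead of waiting on loads.
__kernel void burn(__global float *out, const int iters)
{
    int gid = get_global_id(0);
    float a = (float)gid * 0.001f + 1.0f;

    for (int i = 0; i < iters; ++i) {
        a = fma(a, 1.000001f, 0.0001f);  // dependent FMA chain, ALU-bound
    }

    out[gid] = a;  // write the result so the loop is not optimized away
}
```

Launched with a global size large enough to fill all compute units and a high iteration count, a kernel like this should push power consumption noticeably higher than a memory-bound kernel doing the same number of "operations".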

Thanks in advance.

The hardware resources in a GPU are shared between graphics and compute workloads. Different workloads exercise different parts of the hardware to different degrees, and different hardware resources make different contributions to the overall thermal load. That explains the temperature differences you are seeing: the workloads are different, so the power consumption, and thus the thermal load, differs as well. If you compare multiple graphics workloads and multiple OpenCL workloads, you will likely find significant variations in power consumption within each class. nvidia-smi can report both power consumption and temperature.
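For example, one way to watch this while each application runs is nvidia-smi's query interface (exact field availability can vary by driver and GPU generation):

```shell
# Sample power draw, temperature, and utilization once per second
# while the DirectX or OpenCL workload is running.
nvidia-smi --query-gpu=power.draw,temperature.gpu,utilization.gpu \
           --format=csv -l 1
```

Logging both workloads this way lets you compare power draw directly, rather than inferring it from temperature alone, since temperature also depends on fan curves and cooling.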