Hello,
At what clock speed can we expect the GTX Titan to run CUDA software under Linux: 836, 876, or 973 MHz?
It seems that by default (80 C target) the Titan under Windows mostly runs at the maximal core speed (around 1 GHz), not the boost one, and drops back to 836 MHz only when the temperature rises above 80 C.
Does Nvidia support Boost 1.0 and/or Boost 2.0 under Linux?
I asked the same question in the Linux category and of the Amber 12 developers (Amber is the CUDA software I use under Linux), but they don’t know either and will ask Nvidia. Moreover, it seems that nobody knows whether the boost speed of the GTX 6xx series is supported under Linux at all. It is also unclear why a device query returns the following:
|------------------- GPU DEVICE INFO --------------------
|
| CUDA Capable Devices Detected: 1
| CUDA Device ID in use: 0
| CUDA Device Name: GeForce GTX 680
| CUDA Device Global Mem Size: 2047 MB
| CUDA Device Num Multiprocessors: 8
| CUDA Device Core Freq: 0.71 GHz
|
|--------------------------------------------------------
Please share any experience with GTX 6xx cards and boost technology for CUDA calculations and performance under Linux. For instance, how can we see the actual clock speed at which a GTX 680 runs a CUDA program under Linux?
Regards,
EDIT: The topic should actually be “The clock speed of GTX Titan for CUDA calculations”; sorry about that, but I am no longer able to correct the title… :(
What are you using to query the GPU device? On my GTX 680, I get the following from the deviceQuery sample:
Device 3: "GeForce GTX 680"
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock rate: 1058 MHz (1.06 GHz)
Memory Clock rate: 3004 Mhz
Memory Bus Width: 256-bit
Thanks for your answer! I have only GTX 5xx GPUs and just cited the results that were posted by the Amber developers.
Your output was obtained under Linux, correct? If so, great, because that is indeed the GTX 680 boost speed! I received this test, which suggests that the GTX 6xx series presumably does use the boost speed: http://img22.imageshack.us/img22/1279/luxmarkubuntu1204.png
However, Nvidia claims this:
“Unfortunately no, boost 1.0/2.0 are only supported on Windows.” Thus it is still unclear to me what real CUDA performance, in terms of clock speed, one can expect from the GTX Titan…
If I can find a justification for a $1000 video card to give my boss (I really want to experiment with dynamic parallelism), I’ll be sure to let you know the clock rate… :)
(D15U-50 is a GTX Titan, EVGA SuperClocked edition, running on NVIDIA 313.18 drivers.)
root@Tesla:/usr/local/cuda/samples/1_Utilities/deviceQuery# ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Device 0: "D15U-50"
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.5
Total amount of global memory: 6144 MBytes (6442254336 bytes)
(14) Multiprocessors x (192) CUDA Cores/MP: 2688 CUDA Cores
GPU Clock rate: 928 MHz (0.93 GHz)
Memory Clock rate: 3004 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 1572864 bytes
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 3 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >