This thread will serve as the GTX 1070 performance thread.
Card in possession:
MSI GeForce GTX 1070 GAMING X 8G (8 GB)
Price paid: $459.00
Computer specs:
OS: Windows 7 / Ubuntu 16.04
CPU: Intel Core i7-4770 @ 3.40 GHz (max 16 PCI Express lanes)
RAM: Corsair Vengeance 32 GB (4x8 GB) DDR3 1600 MHz (PC3 12800)
Mobo: Gigabyte GA-Z87X-UD5H Z87 (PCIe 3.0 x16 and x8)
I also have a 750 Ti that I can set up as the dedicated display card, running the 1070 as a dedicated CUDA card. However, my processor only supports 16 PCI Express lanes, and the mobo manual states:
1 x PCI Express x16 slot, running at x8 (PCIEX8)
The PCIEX8 slot shares bandwidth with the PCIEX16 slot. When the PCIEX8 slot is populated, the PCIEX16 slot will operate at up to x8 mode.
So, let me know if you want results for that configuration in addition to the 1070 running as the sole graphics card in the system.
Please post the tests you would like me to run (and whatever is needed to run them), and I'll make sure to do so over the weekend and post the results.
Would have had this up sooner, but I ran into this gem:
Hardware:
Mobo: Gigabyte GA-Z87X-UD5H (x16 PCIe 3.0)
CPU: Intel Core i7-4770 @ 3.40 GHz (max 16 PCI Express lanes)
RAM: Corsair Vengeance 32 GB (4x8 GB) DDR3 1600 MHz (PC3 12800)
GPU: MSI GeForce GTX 1070 GAMING X 8G (8 GB, display and CUDA card)
Software:
OS: Ubuntu 16.04 (fresh install)
Driver: UNIX x86_64 kernel module 367.35
CUDA:
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Wed_May__4_21:01:56_CDT_2016
Cuda compilation tools, release 8.0, V8.0.26
Max observed card clocks during the test (NVIDIA X Server Settings):
Graphics clock: 1950 MHz
Memory transfer rate: 8012 MHz
> nvidia-smi
Mon Aug 8 02:17:03 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.35 Driver Version: 367.35 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 0000:01:00.0 On | N/A |
| 0% 50C P8 11W / 230W | 330MiB / 8112MiB | 6% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 10644 G /usr/lib/xorg/Xorg 184MiB |
| 0 11072 G compiz 143MiB |
+-----------------------------------------------------------------------------+
> Command: ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1070"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8112 MBytes (8506179584 bytes)
(15) Multiprocessors, (128) CUDA Cores/MP: 1920 CUDA Cores
GPU Max Clock rate: 1772 MHz (1.77 GHz)
Memory Clock rate: 4004 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1070
Result = PASS
Question:
Why does the older Titan X/980 Ti show "Run time limit on kernels: No" while the current 1070 shows "Run time limit on kernels: Yes"? Is this driver, software, or hardware?
If I intend to run a kernel for a week on the GPU, will this be a problem on the 1070?
Essentially, what is going on here? What is the consequence of the run-time limit?
Can it be bypassed via a command or setting?
Why the difference between the 1070 and the older 980 Ti/Titan X?
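For reference, the same flag can be read programmatically through the runtime API. A minimal host-side sketch (pure runtime API, no kernel launch) that prints the watchdog state for every device in the system:

```cuda
// Sketch: print the "run time limit on kernels" (watchdog) flag
// for each CUDA device. Compile with: nvcc check_watchdog.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int dev = 0; dev < n; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d (%s): run time limit on kernels: %s\n",
               dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "Yes" : "No");
    }
    return 0;
}
```

Running this with the 750 Ti driving the display and the 1070 idle should show whether the flag follows the display attachment rather than the silicon.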
On to the bandwidth tests…
./bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: GeForce GTX 1070
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 11766.6
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 12463.2
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 191674.2
Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
./bandwidthTest --dtod --mode=range --start=1073741824 --end=1073741824 --increment=1
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: GeForce GTX 1070
Range Mode
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
1073741824 190531.1
Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
For kicks, I changed the PowerMizer setting to maximum performance at the end and re-ran the tests.
The only notable change was the host-to-device value in the bandwidth test (11766.6 MB/s to 12399.4 MB/s):
./bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: GeForce GTX 1070
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 12399.4
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 12463.5
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 190703.2
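For anyone wanting to reproduce these numbers outside the samples, here is a rough sketch of what bandwidthTest does for a pinned host-to-device copy. The single-iteration timing and the 32 MiB size (matching Quick Mode) are simplifications of the actual sample, which averages several runs:

```cuda
// Sketch: time one pinned (page-locked) host-to-device copy with
// CUDA events and report MB/s, as bandwidthTest does.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 32 << 20;   // 32 MiB, matching Quick Mode
    void *h_buf, *d_buf;
    cudaMallocHost(&h_buf, bytes);   // pinned host allocation
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("H2D: %.1f MB/s\n", (bytes / 1.0e6) / (ms / 1.0e3));

    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}
```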
I noticed this in the deviceQuery results: Run time limit on kernels: Yes
I still haven't bought a Pascal card yet (the Titan X is calling me, though!), but I was under the impression that Pascal no longer required the watchdog timer. The GTX 1080 query results from Cudaaduc showed no watchdog… but, thinking about it, he was probably testing in a multi-GPU system where the 1080 wasn't used for display, so there was no watchdog.
The GP100 whitepaper touts Pascal's preemption/scheduler improvements, which allow long-running kernels even when run simultaneously with graphics or other, shorter kernels. I guess this shows that this significant new feature is not in Pascal generally, but in the P100 only.
I admit I have gotten into the bad habit of coding multi-GPU simulations that aren't broken up into millisecond chunks. My calculations usually run on three non-display GPUs with a single kernel launch that takes minutes or even days to finish.
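For multi-GPU runs like that, it's cheap to screen each device for the watchdog before committing a long launch to it. A minimal sketch (the selection loop is my own illustration, not from the samples):

```cuda
// Sketch: skip any GPU whose watchdog is enabled (i.e. likely driving
// a display) before launching a long-running kernel on it.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int dev = 0; dev < n; ++dev) {
        int timeout = 0;
        cudaDeviceGetAttribute(&timeout, cudaDevAttrKernelExecTimeout, dev);
        if (timeout) {
            printf("Device %d has the watchdog enabled, skipping\n", dev);
            continue;
        }
        cudaSetDevice(dev);
        // ...launch the long-running kernel on this device...
        printf("Device %d looks safe for long-running kernels\n", dev);
    }
    return 0;
}
```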
Thank you for the reply. I will try to put my 750 Ti back in my case later this week, run it as my display graphics card, and run the 1070 as a dedicated CUDA card to see if that changes anything.
Can someone with a 1070 as a dedicated CUDA card comment?