I ran into a strange issue after upgrading my graphics card from a TITAN to a GTX 980 Ti and my OS from Windows 7 to Windows 10.
The same application that was able to utilize 6 GB of memory on the TITAN cannot allocate more than 4.5 GB on the GTX 980 Ti.
I am compiling with the 64-bit switch on.
The deviceQuery output shows 6 GB of memory as well:
deviceQuery.exe Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 980 Ti"
CUDA Driver Version / Runtime Version 7.5 / 7.5
CUDA Capability Major/Minor version number: 5.2
Total amount of global memory: 6144 MBytes (6442450944 bytes)
(22) Multiprocessors, (128) CUDA Cores/MP: 2816 CUDA Cores
GPU Max Clock rate: 1291 MHz (1.29 GHz)
Memory Clock rate: 3600 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 3145728 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = GeForce GTX 980 Ti
Result = PASS
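For what it's worth, here is a quick sketch of how the free/total memory can be queried at runtime with cudaMemGetInfo (an illustrative standalone check, not part of my application or of deviceQuery):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Establish a context on device 0, then ask the runtime for free/total device memory.
    cudaSetDevice(0);
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("free: %zu MB, total: %zu MB\n",
           freeBytes / (1024 * 1024), totalBytes / (1024 * 1024));
    return 0;
}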
I even have a unit test that reproduces it:
TEST(GpuAllocator_AllocateDeallocate) {
    static const frames_size_type SIZE = 1024 * 1024 * 512;  // 512 MB per allocation
    static const frames_size_type ALLOCATION_COUNT = 11;     // 5.5 GB total

    void* pointers[ALLOCATION_COUNT];
    for (frames_size_type i = 0; i < ALLOCATION_COUNT; ++i) {
        pointers[i] = FramesLib::Memory::GpuAllocator::Allocate(SIZE);
    }
    for (frames_size_type i = 0; i < ALLOCATION_COUNT; ++i) {
        FramesLib::Memory::GpuAllocator::Deallocate(pointers[i]);
    }
}
This fails on the allocation for i = 10, i.e. after 5 GB has already been allocated successfully.
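To rule out our FramesLib wrapper, the same pattern can be reproduced with the plain CUDA runtime API; a minimal sketch (this assumes GpuAllocator::Allocate/Deallocate are essentially thin wrappers around cudaMalloc/cudaFree, which is an assumption made only for this illustration):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t kChunk = 512ull * 1024 * 1024;  // 512 MB per allocation
    const int kCount = 11;                       // 5.5 GB total
    void* ptrs[kCount] = {};

    for (int i = 0; i < kCount; ++i) {
        cudaError_t err = cudaMalloc(&ptrs[i], kChunk);
        if (err != cudaSuccess) {
            // Report which allocation fails and why.
            printf("cudaMalloc failed at i = %d: %s\n", i, cudaGetErrorString(err));
            break;
        }
        printf("allocation %d ok (%zu bytes allocated so far)\n", i, (i + 1) * kChunk);
    }

    for (int i = 0; i < kCount; ++i) {
        if (ptrs[i]) cudaFree(ptrs[i]);
    }
    return 0;
}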
Has anyone seen this before? What could be the issue?