Single allocation limit of 2 GB on Linux

Hello all.

Is there a limit on the size of a single allocation under 64-bit Linux? (Tesla C2070, CUDA toolkit v3.2, driver v260.19.36, kernel 2.6.37.) I cannot allocate more than 2 GB at once.

Here is a simple test program (compiled with [font="Courier New"]nvcc -o mem -arch=sm_20 mem.cu[/font]):

#include <iostream>
#include <cstdlib>

using namespace std;

int main(int argc, char *argv[]) {
    // Allocation size in gigabytes and number of allocations, taken from the command line.
    double size_gigs = 2;
    int n = 1;
    if (argc > 1) size_gigs = atof(argv[1]);
    if (argc > 2) n = atoi(argv[2]);

    int size_bytes = size_gigs * 1024 * 1024 * 1024;

    void *m[n];
    for (int i = 0; i < n; i++) {
        m[i] = 0;
        if (cudaSuccess == cudaMalloc(&m[i], size_bytes)) {
            cout << "Yay!" << endl;
        } else {
            cout << "Oops" << endl;
        }
    }

    for (int i = 0; i < n; i++)
        cudaFree(m[i]);
}

It tries to allocate argv[1] gigabytes, argv[2] times. Anything below 2 GB succeeds, even several times in a row, but a single 2 GB allocation fails. For example:

[font="Courier New"]
./mem 1.99 1
Yay!

./mem 1.1 5
Yay!
Yay!
Yay!
Yay!
Yay!

./mem 2 1
Oops
[/font]

I've seen similar posts on this forum, but the answer was always "It's a problem with Windows Vista and the WDDM driver. Switch to TCC or Linux and be happy."
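For completeness, here is a minimal way to check how much memory the runtime reports as free and in total on the device (just a sketch compiled the same way with nvcc; cudaMemGetInfo is the runtime call used, error handling kept to a minimum):

#include <iostream>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    // Ask the runtime how much device memory is currently free and how much exists in total.
    if (cudaSuccess == cudaMemGetInfo(&free_bytes, &total_bytes)) {
        std::cout << "free:  " << free_bytes  / double(1 << 30) << " GB" << std::endl;
        std::cout << "total: " << total_bytes / double(1 << 30) << " GB" << std::endl;
    } else {
        std::cout << "cudaMemGetInfo failed" << std::endl;
    }
    return 0;
}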

How about using an "unsigned int" for the size? ;-) (Or maybe an unsigned long int if you need more than 4 GB.)

Ceearem

My first lol of the morning.

Shame on me!
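Indeed: 2 * 1024 * 1024 * 1024 is 2147483648, which is one past INT_MAX (2147483647), so the size was being mangled by the int conversion before cudaMalloc ever saw it. For anyone else who lands on this thread, the change is just the following (an untested sketch, but size_t is what cudaMalloc expects anyway):

// size_t is 64 bits wide on 64-bit Linux, so 2 GB (2147483648 bytes) fits,
// whereas a 32-bit int overflows at 2147483647.
size_t size_bytes = (size_t)(size_gigs * 1024 * 1024 * 1024);

// cudaMalloc's size parameter is a size_t, so the value passes through unchanged.
cudaMalloc(&m[i], size_bytes);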