Excuse me in advance if this is a trivial question, but I couldn’t find the answer to it.
I have access to an NVIDIA GTX Titan X GPU with 12 GB of memory. The clinfo command returns the correct value of CL_GLOBAL_MEM_SIZE:
Address bits: 64
Global memory size: 12884705280
Name: GeForce GTX TITAN X
I have written my own code to query this parameter, but it prints a wrong value:
:::::::::::::::::::::::::::::::::::
Platform: NVIDIA CUDA
CL_DEVICE_NAME: GeForce GTX TITAN X
CL_DEVICE_ADDRESS_BITS: 64
CL_DEVICE_GLOBAL_MEM_SIZE: 4294770688
:::::::::::::::::::::::::::::::::::
The content of the corresponding .cpp file is here: [url]http://pastebin.com/YJPs8agw[/url]. I tried compiling it as
g++ -I/usr/local/cuda-7.5/targets/x86_64-linux/include program.cpp -l OpenCL
and
nvcc -I/usr/local/cuda-7.5/targets/x86_64-linux/include program.cpp -l OpenCL
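For reference, a minimal standalone version of the query (my reconstruction here, not the exact pastebin code) looks like the following; note that the OpenCL headers define the return type of CL_DEVICE_GLOBAL_MEM_SIZE as cl_ulong, which is 64-bit even in a 32-bit build:

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;

    /* take the first platform and its first GPU device */
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL GPU device found\n");
        return 1;
    }

    cl_ulong mem_size = 0;  /* must be cl_ulong (64-bit), not cl_uint */
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof(mem_size), &mem_size, NULL);
    printf("CL_DEVICE_GLOBAL_MEM_SIZE: %llu\n",
           (unsigned long long)mem_size);
    return 0;
}
```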
It looks as if the value were truncated to 32 bits. Is the program perhaps being compiled as a 32-bit application?
Could anybody tell me how to obtain all 12 GB of global memory in the output of the program?
Thank you in advance,
Natalia