I don't understand the execution times (K40c & GTX 580).

Hello all!!

I have a cluster with…

./deviceQuery Starting…
CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)
Device 0: “Tesla K40c”
CUDA Driver Version / Runtime Version 6.5 / 5.5
CUDA Capability Major/Minor version number: 3.5
Total amount of global memory: 12288 MBytes (12884705280 bytes)
(15) Multiprocessors, (192) CUDA Cores/MP: 2880 CUDA Cores
GPU Clock rate: 745 MHz (0.75 GHz)
Memory Clock rate: 3004 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 1572864 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 2 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: “GeForce GTX 580”
CUDA Driver Version / Runtime Version 6.5 / 5.5
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 1536 MBytes (1610153984 bytes)
(16) Multiprocessors, ( 32) CUDA Cores/MP: 512 CUDA Cores
GPU Clock rate: 1544 MHz (1.54 GHz)
Memory Clock rate: 2004 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Peer access from Tesla K40c (GPU0) → GeForce GTX 580 (GPU1) : No
Peer access from GeForce GTX 580 (GPU1) → Tesla K40c (GPU0) : No

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 5.5, NumDevs = 2, Device0 = Tesla K40c, Device1 = GeForce GTX 580
Result = PASS


Summarizing…
Device 0 → Tesla K40
Device 1 → GTX 580.

I’m trying to compile a simple matrix-addition code, addingmatrix.cu, and I want to compare the effect of different compilation flags on my cards.

The output of my code appears to be OK in all cases.

In my code, when I call cudaSetDevice(0)… (the K40, in theory)
compiling with “nvcc sumaMatrices.cu -arch=sm_20 -o code20” (I know this arch is not the K40’s; I’m just trying it, and as long as there are no conflicting instructions, it works.)
the execution of this code…
with kernel 1.025984 msec
in cpu sequential 74.503777 msec

compiling with “nvcc sumaMatrices.cu -arch=sm_35 -o code35”
the execution of this code…
with kernel 1.022112 msec
in cpu sequential 74.185089 msec

No differences… I expected some small ones, but no problem.

In my code, when I call cudaSetDevice(1)…
compiling with “nvcc sumaMatrices.cu -arch=sm_20 -o code20”
the execution of this code…
with kernel 1.270016 msec
in cpu sequential 75.304932 msec

compiling with “nvcc sumaMatrices.cu -arch=sm_35 -o code35”
the execution of this code…
with kernel 0.001248 msec, with correct outputs…
in cpu sequential 75.268066 msec

My question is… when I call cudaSetDevice(1) (the GTX 580, as deviceQuery says…), why are the times better with the sm_35 flag??? Does that make sense to you???
I expected that to happen with the K40… but not with the GTX!!!

Is it possible that cudaSetDevice() is interpreting the argument as something other than the device number???

Thanks in advance…

How often did you measure each individual run?
If only once each, you probably measured only the initial just-in-time (JIT) compilation in the CUDA driver.
Consecutive runs should load a cached binary that the driver maintains.

That probably happened with your fast GTX 580 result. It’s not an SM 3.5 device, which means it shouldn’t actually have executed the code you expected in your last measurement; most likely it used the matching SM 2.0 binary it had cached from the run before.
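(A hypothetical way to check this: the CUDA driver’s JIT cache can be disabled via the documented CUDA_CACHE_DISABLE environment variable, so every run pays the full compilation cost. code20 here is just the binary name from your compile commands.)

    CUDA_CACHE_DISABLE=1 ./code20    # force the driver to JIT-compile on every run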

I think all will become very clear once you add proper status checking to your code. As Detlef points out, the GTX 580 is an sm_20 device, and cannot execute sm_35 code. The kernel compiled for sm_35 will fail to execute on the GTX 580 and control is returned to the host right away, explaining the extremely short “run time”.
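For illustration, a minimal status-checking sketch (the macro name CHECK_CUDA is arbitrary); calling cudaGetLastError() right after the launch would have flagged the failure immediately:

    // minimal error-checking sketch; the macro name CHECK_CUDA is arbitrary
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define CHECK_CUDA(call) do {                                     \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "CUDA error '%s' at %s:%d\n",             \
                    cudaGetErrorString(err_), __FILE__, __LINE__);    \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

    // after a kernel launch:
    // myKernel<<<grid, block>>>(...);
    CHECK_CUDA(cudaGetLastError());       // catches launch failures (e.g. wrong arch)
    CHECK_CUDA(cudaDeviceSynchronize());  // catches errors during kernel execution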

Your initial observation is not unusual. The performance of matrix addition is bound by memory throughput, so ISA differences between sm_20 and sm_35 are unlikely to cause much of a difference. Also, much of the code transformation in the frontend of the compiler (up to the generation of PTX) is architecture independent. Only the backend that transforms PTX into machine code for a particular architecture is highly architecture specific. For a simple code like matrix addition, the machine code resulting from JIT compilation to sm_35 (from PTX generated with sm_20 target) is likely largely identical to the machine code generated for sm_35 in offline compilation (from PTX generated with sm_35 target).
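To put rough numbers on the throughput argument, using the deviceQuery output above: the K40c’s theoretical peak bandwidth is about 3004 MHz × 2 (DDR) × 384 bit / 8 ≈ 288 GB/s, while the GTX 580’s is about 2004 MHz × 2 × 48 bytes ≈ 192 GB/s, which fits the GTX 580’s kernel time being the longer of the two (about 1.27 ms vs 1.02 ms) despite its higher core clock.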

If you repeat the experiment with many different kernels you should be able to find cases where compiling the PTX for a non-native target, then JIT compiling to the native architecture, shows a noticeable performance difference compared with code compiled straight for the native architecture.
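As an aside, a hypothetical way to cover both GPUs in your machine with one binary is a fat binary that embeds SASS for each architecture plus PTX for forward compatibility:

    nvcc sumaMatrices.cu -o code_fat \
        -gencode arch=compute_20,code=sm_20 \
        -gencode arch=compute_35,code=sm_35 \
        -gencode arch=compute_35,code=compute_35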

I assumed something ran successfully in the last case because the output explicitly noted “with right outputs”, though an unchecked error would explain the comparably short runtime even better.

If the sm_20 version of the code ran first, and produced a valid result, that result could still be there in GPU memory when the sm_35 version of the app runs. So not checking kernel status and blindly copying back uninitialized GPU memory to the host may deliver data that looks correct (since it was deposited in GPU memory by the previous, correctly working, run).

The conservative way to test is to initialize the memory allocation for the result to NaN, using cudaMemset(ptr, 0xff, size).
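A sketch of that; the buffer name and size here are examples, not names from your code:

    // poison the result buffer so stale results can't masquerade as fresh ones
    const size_t N = 1024 * 1024;                      // example element count
    float *d_result = NULL;
    cudaMalloc((void **)&d_result, N * sizeof(float));
    cudaMemset(d_result, 0xff, N * sizeof(float));     // all-ones bytes = NaN for floats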

First of all, thank you!!!
I’ve been trying several things, but I’ve found a strange issue…

I’ve tested initializing the memory allocation using cudaMemset… but it still produces the correct results…

Another thing I’ve tried that shocked me, and which could perhaps explain everything (if only I knew the reasons):

I was getting the device name with:

   "cudaDeviceProp device;
    cudaSetDevice(1);
    cudaGetDeviceProperties(&device, 1);
    printf("With cudaDeviceProp:\n%s\n",device.name);

It prints → GeForce GTX 580

And I also get the device name with…

    char cdispname[255];
    nvmlDevice_t devicewatt;
    nvmlReturn_t ret;
    ret = nvmlInit();                                  // initialize NVML first
    ret = nvmlDeviceGetHandleByIndex(1, &devicewatt);  // get handle for NVML device index 1

    nvmlDeviceGetName(devicewatt, cdispname, 254);
    printf("With nvml\n%s\n", cdispname);

And it prints → Tesla K40c

Why does nvml return a different name??? It seemed the more appropriate choice to me… but now I don’t understand getting different names for the same device ID!!!
I’m calling cudaSetDevice() with the device ID that the deviceQuery sample shows.
I assumed 0 = Tesla, 1 = GTX 580.
Am I executing on a different card than I think???

I have a feeling this could explain the execution-time issue, but I don’t know how to explain it.

nvml and CUDA enumerate GPUs in (potentially) different orders. If you are running CUDA code, you can always figure out which device a kernel will run on by calling cudaGetDevice immediately before the kernel launch (and/or cudaGetDeviceProperties). I would ignore nvml (and nvidia-smi) for this discussion about which device is being used.
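For example (a sketch; the kernel launch is a placeholder):

    // confirm which device the next kernel launch will actually use
    int dev = -1;
    cudaGetDevice(&dev);
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("launching on device %d: %s (cc %d.%d)\n",
           dev, prop.name, prop.major, prop.minor);
    // myKernel<<<grid, block>>>(...);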

The ~1 µs execution time of a kernel is not explainable by anything other than an error (failure to launch). Code compiled for sm_35 will not run on an sm_20 GPU.

If your claim is that you are compiling for sm_35 (only), running on a cc2.0 GPU, and getting correct results, that is simply incorrect. If you provide a completely worked example (code, compile command, output, and machine configuration), I’m sure someone could point out the error.

If you run your code/tests with cuda-memcheck and/or proper cuda error checking, I’m sure that will be instructive as well.
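For example, against the sm_35 build from your post:

    cuda-memcheck ./code35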

Exactly, you were all right.

My code had an error… which is not very interesting to discuss. The code wasn’t originally mine, and I assumed it was good.

Finally I found the error and everything makes sense… My “right outputs” weren’t so correct after all!!! ;)

Thank you all for your time; now I have only one more question…

Why do nvml and CUDA enumerate GPUs in (potentially) different orders???

The short answer is that the CUDA designers decided a long time ago that they wanted CUDA to enumerate the “most powerful” CUDA device first. The reason is that naive codes that don’t survey which GPUs are available, but just run on the default (enumerated-zero) device, will tend to run faster.

nvml enumerates in PCI order.
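If you ever need to correlate the two enumerations, here is a sketch that matches devices by PCI bus ID instead of by index (using cudaDeviceGetPCIBusId and nvmlDeviceGetHandleByPciBusId; error checking omitted for brevity):

    // map a CUDA device ordinal to the corresponding NVML handle
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <nvml.h>

    int main(void)
    {
        int cudaDev = 1;                  // the ordinal you would pass to cudaSetDevice
        char busId[32];
        cudaDeviceGetPCIBusId(busId, sizeof(busId), cudaDev);

        nvmlInit();
        nvmlDevice_t nvmlDev;
        nvmlDeviceGetHandleByPciBusId(busId, &nvmlDev);   // same physical GPU

        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        nvmlDeviceGetName(nvmlDev, name, sizeof(name));
        printf("CUDA device %d (%s) is %s\n", cudaDev, busId, name);
        nvmlShutdown();
        return 0;
    }

(In later CUDA versions, setting the environment variable CUDA_DEVICE_ORDER=PCI_BUS_ID also makes the CUDA runtime enumerate in PCI order.)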

Ok.

Thanks for your answers and your time!!