Implement 2D matrix transpose using warp shuffle without local memory

I ran the proposed solution from the link below and confirmed that the compiler places the register array in local memory.
This seems to be because the register array is indexed with threadIdx.x^i, which the compiler cannot resolve to a compile-time constant.
Is there any way to perform a 2D matrix transpose with warp shuffles without local memory allocation?

Transpose 2D matrix

Before:

thread 0: 00 01 02 03 04 05 06 07
thread 1: 08 09 0a 0b 0c 0d 0e 0f
thread 2: 10 11 12 13 14 15 16 17
thread 3: 18 19 1a 1b 1c 1d 1e 1f
thread 4: 20 21 22 23 24 25 26 27
thread 5: 28 29 2a 2b 2c 2d 2e 2f
thread 6: 30 31 32 33 34 35 36 37
thread 7: 38 39 3a 3b 3c 3d 3e 3f

After:

thread 0: 00 08 10 18 20 28 30 38
thread 1: 01 09 11 19 21 29 31 39
thread 2: 02 0a 12 1a 22 2a 32 3a
thread 3: 03 0b 13 1b 23 2b 33 3b
thread 4: 04 0c 14 1c 24 2c 34 3c
thread 5: 05 0d 15 1d 25 2d 35 3d
thread 6: 06 0e 16 1e 26 2e 36 3e
thread 7: 07 0f 17 1f 27 2f 37 3f

Suggested solution

#include <cstdio>
__global__ void t(){

  int u[8];
  // each thread holds one row of the 8x8 matrix
  for (int i = 0; i < 8; i++) u[i] = threadIdx.x*8+i;
  for (int i = 0; i < 8; i++) printf("lane: %d, idx: %d, val: %d\n", threadIdx.x, i, u[i]);
  // exchange element idx with the lane at xor distance i
  for (int i = 1; i < 8; i++){
    int idx = threadIdx.x^i;
    u[idx] = __shfl_sync(0x000000FF, u[idx], idx);}
  for (int i = 0; i < 8; i++) printf("lane: %d, idx: %d, tra: %d\n", threadIdx.x, i, u[i]);
}

int main(){

  t<<<1,8>>>();
  cudaDeviceSynchronize();
}

link

#pragma unroll 8 in front of each for loop will help

#pragma unroll 7

(for the loop that is doing the work - not the printf loops) Yes, it may help with performance.
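Concretely, the suggestion amounts to placing the pragma on the working loop of the kernel above (a sketch; the printf loops are left alone):

  #pragma unroll 7
  for (int i = 1; i < 8; i++){
    int idx = threadIdx.x^i;
    u[idx] = __shfl_sync(0x000000FF, u[idx], idx);}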

I don’t think it should affect the need for usage of local memory. The construct threadIdx.x^i (or threadIdx.x^1, etc., in the unrolled case) cannot be reduced to a constant discoverable at compile time, which is required AFAIK for the promotion of items in the local space from an indexable array to registers.

There has to be a way to re-state the shuffle expression so that it reads and writes from u[threadIdx.x] and uses an idx variable that is constructed differently.

Note that u[threadIdx.x], if u is in the local space, also doesn’t satisfy the need I previously expressed (reading a local array using an index that is a discoverable compile-time constant).

But I agree that something like that is the crux of the question. I don’t think a simple unroll by itself is sufficient.

Also, with respect to local space, another contributor to reduced performance is the non-uniform access pattern. Local space accesses, when actually targeting a local array, will tend to “coalesce” when each thread is reading the same array element index. That isn’t happening in the original formulation, and u[threadIdx.x] doesn’t satisfy that condition either.
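To illustrate the compile-time-constant point with a toy example of my own (not from either thread): after full unrolling, every index into u below is a literal constant, so the compiler is free to keep u in registers; the single access through a runtime-dependent index is what typically prevents that promotion and pushes u into addressable local memory.

  __global__ void idx_demo(int *out){
    int u[4];
    #pragma unroll
    for (int i = 0; i < 4; i++) u[i] = i * 10;  // indices 0..3 become compile-time constants
    out[threadIdx.x] = u[threadIdx.x & 3];      // runtime index: u must be addressable
  }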

Anyway, the problem interests me a lot because I’ve been looking for a register-only matrix transpose previously without much success.

I suppose we could use non-array variables like u0…u7 and manually perform the unrolling. It just makes the code dependent on the matrix size. And you cannot index by thread index anymore. ;-(

Looks like the approach suggested by monoid in the previous thread is the proper one.

It’s not obvious to me how that would work. If you had an “orderly” initial load of u0…u7, then you could not load or swap the same variables across threads. Shuffle wouldn’t work if one thread wants to be shuffling u0 while another thread wants to be shuffling u7.

One thing you could do is “transpose” the data at the initial load point of u0…u7. Then you could do a trivial transpose, swapping/shuffling the same variables across threads. But that strikes me as uninteresting. It pushes the problem somewhere else.
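For what it’s worth, here is a minimal sketch of that load-time “skew” idea (my own, assuming the 8x8 matrix actually starts in global memory; in and out are hypothetical pointers). Because the register array is only ever indexed by the unrolled loop counter k, it can stay in registers; the xor skew is absorbed into the global-memory addressing instead:

  __global__ void skewed_transpose(const int *in, int *out){
    const unsigned t = threadIdx.x;                // intended launch: <<<1,8>>>, matching the 0xFF mask
    int u[8];
    #pragma unroll
    for (int k = 0; k < 8; k++)
      u[k] = in[t*8 + (t^k)];                      // skewed load: u[k] = A[t][t^k]
    #pragma unroll
    for (int k = 0; k < 8; k++)
      u[k] = __shfl_xor_sync(0x000000FF, u[k], k); // now u[k] = A[t^k][t]
    #pragma unroll
    for (int k = 0; k < 8; k++)
      out[t*8 + (t^k)] = u[k];                     // skewed store writes the transpose row-major
  }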

When I have struggled with this problem in the past, I encoded per-thread load orders. That made sense to some degree because the loading of data was not the entirety of work to be done. Here, the loads and stores are pretty much it, so concocting a load order array that has to be loaded by each thread, just so that another load could be optimized, doesn’t strike me as sensible, either.

I guess after thinking about it more, for me, anyway, the original question needs more clarification. If the data does not start in local memory, where does it start, exactly?

I would need to see an actual CUDA C++ example showing the exact data input organization, without any description that uses the word “register”. Even then, I may have nothing useful to add.

Thank you all. Your comments have been of great help to me.

@Robert_Crovella
I think I expressed it vaguely. As you said, the u array does start in local memory; my question was, perhaps as you understood it, how the u array can be promoted from local memory to registers.

In conclusion, it seems difficult to transpose a 2D matrix using warp shuffle for an array promoted to registers, right?
I am not sure if I understood correctly, but as @cbuchner1 suggested, manually unrolling the register array seems to inevitably involve many branch instructions and branch divergence.

That is eventually what I gravitated to. After spending some time thinking about how to express a transpose such that all movements could be expressed as compile-time constants (kind of like sorting-network methodology), I ended up concluding that the strategy expressed in the other thread should work. Here is an example for 8x8:

$ cat t1984.cu
#include <cstdio>

// the movement
// start:
//  A B
//  C D
// step 1:
//  B A
//  C D
// step 2:
//  C A
//  B D
// step 3:
//  A C
//  B D

template <typename T>
__global__ void t(){
  // initialize data
  T u0, u1, u2, u3, u4, u5, u6, u7, uswap;
  u0 = threadIdx.x*8;
  u1 = threadIdx.x*8+1;
  u2 = threadIdx.x*8+2;
  u3 = threadIdx.x*8+3;
  u4 = threadIdx.x*8+4;
  u5 = threadIdx.x*8+5;
  u6 = threadIdx.x*8+6;
  u7 = threadIdx.x*8+7;
  // print data
  for (int i = 0; i < 8; i++)
    if (threadIdx.x == i) printf("%d %d %d %d %d %d %d %d\n", u0, u1, u2, u3, u4, u5, u6, u7);

  // perform 2x2 movement
  // moving single elements in 2x2 blocks
  // step 1:
  if (!(threadIdx.x&1)) {
    uswap = u1; u1 = u0; u0 = uswap;
    uswap = u3; u3 = u2; u2 = uswap;
    uswap = u5; u5 = u4; u4 = uswap;
    uswap = u7; u7 = u6; u6 = uswap;}
  // step 2:
  u0 = __shfl_xor_sync(0x000000FF, u0, 1);
  u2 = __shfl_xor_sync(0x000000FF, u2, 1);
  u4 = __shfl_xor_sync(0x000000FF, u4, 1);
  u6 = __shfl_xor_sync(0x000000FF, u6, 1);
  // step 3:
  if (!(threadIdx.x&1)) {
    uswap = u1; u1 = u0; u0 = uswap;
    uswap = u3; u3 = u2; u2 = uswap;
    uswap = u5; u5 = u4; u4 = uswap;
    uswap = u7; u7 = u6; u6 = uswap;}

  // perform 4x4 movement
  // moving 2x2 elements in 4x4 blocks
  // step 1:
  if (!(threadIdx.x&2)) {
    uswap = u1; u1 = u3; u3 = uswap;
    uswap = u0; u0 = u2; u2 = uswap;
    uswap = u5; u5 = u7; u7 = uswap;
    uswap = u4; u4 = u6; u6 = uswap;}
  // step 2:
  u0 = __shfl_xor_sync(0x000000FF, u0, 2);
  u1 = __shfl_xor_sync(0x000000FF, u1, 2);
  u4 = __shfl_xor_sync(0x000000FF, u4, 2);
  u5 = __shfl_xor_sync(0x000000FF, u5, 2);
  // step 3:
  if (!(threadIdx.x&2)) {
    uswap = u1; u1 = u3; u3 = uswap;
    uswap = u0; u0 = u2; u2 = uswap;
    uswap = u5; u5 = u7; u7 = uswap;
    uswap = u4; u4 = u6; u6 = uswap;}
  // perform 8x8 movement
  // moving 4x4 elements in 8x8 blocks
  // step 1:
  if (!(threadIdx.x&4)) {
    uswap = u0; u0 = u4; u4 = uswap;
    uswap = u1; u1 = u5; u5 = uswap;
    uswap = u2; u2 = u6; u6 = uswap;
    uswap = u3; u3 = u7; u7 = uswap;}
  // step 2:
  u0 = __shfl_xor_sync(0x000000FF, u0, 4);
  u1 = __shfl_xor_sync(0x000000FF, u1, 4);
  u2 = __shfl_xor_sync(0x000000FF, u2, 4);
  u3 = __shfl_xor_sync(0x000000FF, u3, 4);
  // step 3:
  if (!(threadIdx.x&4)) {
    uswap = u0; u0 = u4; u4 = uswap;
    uswap = u1; u1 = u5; u5 = uswap;
    uswap = u2; u2 = u6; u6 = uswap;
    uswap = u3; u3 = u7; u7 = uswap;}
  // print data
  for (int i = 0; i < 8; i++)
    if (threadIdx.x == i) printf("%d %d %d %d %d %d %d %d\n", u0, u1, u2, u3, u4, u5, u6, u7);
}

int main(){
  t<int><<<1,8>>>();
  cudaDeviceSynchronize();
}

$ nvcc -arch=sm_70 -Xptxas=-v -o t1984 t1984.cu
ptxas info    : 25 bytes gmem
ptxas info    : Compiling entry function '_Z1tIiEvv' for 'sm_70'
ptxas info    : Function properties for _Z1tIiEvv
    32 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 30 registers, 352 bytes cmem[0]
$ ./t1984
0 1 2 3 4 5 6 7
8 9 10 11 12 13 14 15
16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31
32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55
56 57 58 59 60 61 62 63
0 8 16 24 32 40 48 56
1 9 17 25 33 41 49 57
2 10 18 26 34 42 50 58
3 11 19 27 35 43 51 59
4 12 20 28 36 44 52 60
5 13 21 29 37 45 53 61
6 14 22 30 38 46 54 62
7 15 23 31 39 47 55 63
$

If you grep the SASS you will find no LD instructions. So I think although there is local memory usage (e.g. for the stack/call to printf), it’s not related to the algorithm.


Here is a 32x32 version. 90 registers, LoL:

$ cat t1986.cu
#include <cstdio>

// the movement
// start:
//  A B
//  C D
// step 1:
//  B A
//  C D
// step 2:
//  C A
//  B D
// step 3:
//  A C
//  B D
template <typename T>
__device__ __forceinline__ void myswap(T &a, T &b){ T s = a;  a = b; b = s;}
template <typename T, int s>
__device__ __forceinline__ void mymove(T (&u)[32]){
  const int s1 = 2*s;
  // step 1:
  if (!(threadIdx.x&s)) {
    #pragma unroll 16
    for (int i = 0; i < 16; i++){
      int i1 = i%s;
      int i2 = i/s;
      int i3 = i2*s1;
      myswap(u[i3+i1], u[i3+i1+s]);}}
  // step 2:
  #pragma unroll 16
  for (int i = 0; i < 16; i++){
    int i1 = i%s;
    int i2 = i/s;
    int i3 = i2*s1;
    u[i3+i1] = __shfl_xor_sync(0xFFFFFFFF, u[i3+i1], s);}
  // step 3:
  if (!(threadIdx.x&s)) {
    #pragma unroll 16
    for (int i = 0; i < 16; i++){
      int i1 = i%s;
      int i2 = i/s;
      int i3 = i2*s1;
      myswap(u[i3+i1], u[i3+i1+s]);}}
}

template <typename T>
__global__ void t(){
  T u[32];
  // initialize data
  for (int i = 0; i < 32; i++)
    u[i] = threadIdx.x*32+i;
  // print data
  for (int i = 0; i < 32; i++)
    if (threadIdx.x == i){
      for (int j = 0; j < 32; j++)  printf("%d ", u[j]);
      printf("\n");}
  mymove<T, 1>(u);
  mymove<T, 2>(u);
  mymove<T, 4>(u);
  mymove<T, 8>(u);
  mymove<T,16>(u);
  // print data
  for (int i = 0; i < 32; i++)
    if (threadIdx.x == i){
      for (int j = 0; j < 32; j++)  printf("%d ", u[j]);
      printf("\n");}
}

int main(){
  t<int><<<1,32>>>();
  cudaDeviceSynchronize();
}
$ nvcc -lineinfo -Xptxas=-v -arch=sm_70 -o t1986 t1986.cu
ptxas info    : 6 bytes gmem
ptxas info    : Compiling entry function '_Z1tIiEvv' for 'sm_70'
ptxas info    : Function properties for _Z1tIiEvv
    8 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 90 registers, 352 bytes cmem[0]
$ cuobjdump -sass ./t1986 |grep LD
$ compute-sanitizer ./t1986
========= COMPUTE-SANITIZER
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159
160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191
192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223
224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255
256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287
288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319
320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351
352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383
384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415
416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447
448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479
480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511
512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543
544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575
576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607
608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639
640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671
672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703
704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735
736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767
768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799
800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831
832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863
864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895
896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927
928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959
960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991
992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023
0 32 64 96 128 160 192 224 256 288 320 352 384 416 448 480 512 544 576 608 640 672 704 736 768 800 832 864 896 928 960 992
1 33 65 97 129 161 193 225 257 289 321 353 385 417 449 481 513 545 577 609 641 673 705 737 769 801 833 865 897 929 961 993
2 34 66 98 130 162 194 226 258 290 322 354 386 418 450 482 514 546 578 610 642 674 706 738 770 802 834 866 898 930 962 994
3 35 67 99 131 163 195 227 259 291 323 355 387 419 451 483 515 547 579 611 643 675 707 739 771 803 835 867 899 931 963 995
4 36 68 100 132 164 196 228 260 292 324 356 388 420 452 484 516 548 580 612 644 676 708 740 772 804 836 868 900 932 964 996
5 37 69 101 133 165 197 229 261 293 325 357 389 421 453 485 517 549 581 613 645 677 709 741 773 805 837 869 901 933 965 997
6 38 70 102 134 166 198 230 262 294 326 358 390 422 454 486 518 550 582 614 646 678 710 742 774 806 838 870 902 934 966 998
7 39 71 103 135 167 199 231 263 295 327 359 391 423 455 487 519 551 583 615 647 679 711 743 775 807 839 871 903 935 967 999
8 40 72 104 136 168 200 232 264 296 328 360 392 424 456 488 520 552 584 616 648 680 712 744 776 808 840 872 904 936 968 1000
9 41 73 105 137 169 201 233 265 297 329 361 393 425 457 489 521 553 585 617 649 681 713 745 777 809 841 873 905 937 969 1001
10 42 74 106 138 170 202 234 266 298 330 362 394 426 458 490 522 554 586 618 650 682 714 746 778 810 842 874 906 938 970 1002
11 43 75 107 139 171 203 235 267 299 331 363 395 427 459 491 523 555 587 619 651 683 715 747 779 811 843 875 907 939 971 1003
12 44 76 108 140 172 204 236 268 300 332 364 396 428 460 492 524 556 588 620 652 684 716 748 780 812 844 876 908 940 972 1004
13 45 77 109 141 173 205 237 269 301 333 365 397 429 461 493 525 557 589 621 653 685 717 749 781 813 845 877 909 941 973 1005
14 46 78 110 142 174 206 238 270 302 334 366 398 430 462 494 526 558 590 622 654 686 718 750 782 814 846 878 910 942 974 1006
15 47 79 111 143 175 207 239 271 303 335 367 399 431 463 495 527 559 591 623 655 687 719 751 783 815 847 879 911 943 975 1007
16 48 80 112 144 176 208 240 272 304 336 368 400 432 464 496 528 560 592 624 656 688 720 752 784 816 848 880 912 944 976 1008
17 49 81 113 145 177 209 241 273 305 337 369 401 433 465 497 529 561 593 625 657 689 721 753 785 817 849 881 913 945 977 1009
18 50 82 114 146 178 210 242 274 306 338 370 402 434 466 498 530 562 594 626 658 690 722 754 786 818 850 882 914 946 978 1010
19 51 83 115 147 179 211 243 275 307 339 371 403 435 467 499 531 563 595 627 659 691 723 755 787 819 851 883 915 947 979 1011
20 52 84 116 148 180 212 244 276 308 340 372 404 436 468 500 532 564 596 628 660 692 724 756 788 820 852 884 916 948 980 1012
21 53 85 117 149 181 213 245 277 309 341 373 405 437 469 501 533 565 597 629 661 693 725 757 789 821 853 885 917 949 981 1013
22 54 86 118 150 182 214 246 278 310 342 374 406 438 470 502 534 566 598 630 662 694 726 758 790 822 854 886 918 950 982 1014
23 55 87 119 151 183 215 247 279 311 343 375 407 439 471 503 535 567 599 631 663 695 727 759 791 823 855 887 919 951 983 1015
24 56 88 120 152 184 216 248 280 312 344 376 408 440 472 504 536 568 600 632 664 696 728 760 792 824 856 888 920 952 984 1016
25 57 89 121 153 185 217 249 281 313 345 377 409 441 473 505 537 569 601 633 665 697 729 761 793 825 857 889 921 953 985 1017
26 58 90 122 154 186 218 250 282 314 346 378 410 442 474 506 538 570 602 634 666 698 730 762 794 826 858 890 922 954 986 1018
27 59 91 123 155 187 219 251 283 315 347 379 411 443 475 507 539 571 603 635 667 699 731 763 795 827 859 891 923 955 987 1019
28 60 92 124 156 188 220 252 284 316 348 380 412 444 476 508 540 572 604 636 668 700 732 764 796 828 860 892 924 956 988 1020
29 61 93 125 157 189 221 253 285 317 349 381 413 445 477 509 541 573 605 637 669 701 733 765 797 829 861 893 925 957 989 1021
30 62 94 126 158 190 222 254 286 318 350 382 414 446 478 510 542 574 606 638 670 702 734 766 798 830 862 894 926 958 990 1022
31 63 95 127 159 191 223 255 287 319 351 383 415 447 479 511 543 575 607 639 671 703 735 767 799 831 863 895 927 959 991 1023
========= ERROR SUMMARY: 0 errors
$

It seems like we can coax the compiler down to 82 registers with -maxrregcount 82 (at least for sm_70 on cuda 11.4). Below that, you start to get spills.


One thing I wondered about: “is the non-local (register-only) version any better, performance-wise?”

According to the following test case, the register-only version is substantially better:

$ cat t1986.cu
#include <cstdio>
#include <cstdlib>
// the movement
// start:
//  A B
//  C D
// step 1:
//  B A
//  C D
// step 2:
//  C A
//  B D
// step 3:
//  A C
//  B D
template <typename T>
__device__ __forceinline__ void myswap(T &a, T &b){ T s = a;  a = b; b = s;}
template <typename T, int s>
__device__ __forceinline__ void mymove(T (&u)[32]){
  const int s1 = 2*s;
  // step 1:
  if (!(threadIdx.x&s)) {
    #pragma unroll 16
    for (int i = 0; i < 16; i++){
      int i1 = i%s;
      int i2 = i/s;
      int i3 = i2*s1;
      myswap(u[i3+i1], u[i3+i1+s]);}}
  // step 2:
  #pragma unroll 16
  for (int i = 0; i < 16; i++){
    int i1 = i%s;
    int i2 = i/s;
    int i3 = i2*s1;
    u[i3+i1] = __shfl_xor_sync(0xFFFFFFFF, u[i3+i1], s);}
  // step 3:
  if (!(threadIdx.x&s)) {
    #pragma unroll 16
    for (int i = 0; i < 16; i++){
      int i1 = i%s;
      int i2 = i/s;
      int i3 = i2*s1;
      myswap(u[i3+i1], u[i3+i1+s]);}}
}

template <typename T>
__global__ void t(int do_print){
  T u[32];
  // initialize data
  for (int i = 0; i < 32; i++)
    u[i] = threadIdx.x*32+i;
  if (u[0] > do_print)
    // print data
    for (int i = 0; i < 32; i++)
      if (threadIdx.x == i){
        for (int j = 0; j < 32; j++)  printf("%d ", u[j]);
        printf("\n");}
  mymove<T, 1>(u);
  mymove<T, 2>(u);
  mymove<T, 4>(u);
  mymove<T, 8>(u);
  mymove<T,16>(u);
  if (u[0] >= do_print)
    // print data
    for (int i = 0; i < 32; i++)
      if (threadIdx.x == i){
        for (int j = 0; j < 32; j++)  printf("%d ", u[j]);
        printf("\n");}
}

int main(int argc, char *argv[]){
  int do_print = 0;
  if (argc > 1) do_print = atoi(argv[1]);
  t<int><<<1,32>>>(do_print);
  t<int><<<80*1000,32>>>(do_print);
  cudaDeviceSynchronize();
}
$ nvcc -maxrregcount 82  -lineinfo -Xptxas=-v -arch=sm_70 -o t1986 t1986.cu
ptxas info    : 6 bytes gmem
ptxas info    : Compiling entry function '_Z1tIiEvi' for 'sm_70'
ptxas info    : Function properties for _Z1tIiEvi
    8 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 82 registers, 356 bytes cmem[0]
$ cat t1987.cu
#include <cstdio>
#include <cstdlib>
__global__ void t(int do_print){

  int u[32];
  for (int i = 0; i < 32; i++) u[i] = threadIdx.x*32+i;
  if (u[0] >= do_print)
    // print data
    for (int i = 0; i < 32; i++)
      if (threadIdx.x == i){
        for (int j = 0; j < 32; j++)  printf("%d ", u[j]);
        printf("\n");}
  #pragma unroll 31
  for (int i = 1; i < 32; i++){
    int idx = threadIdx.x^i;
    u[idx] = __shfl_sync(0xFFFFFFFF, u[idx], idx);}
  if (u[0] >= do_print)
    // print data
    for (int i = 0; i < 32; i++)
      if (threadIdx.x == i){
        for (int j = 0; j < 32; j++)  printf("%d ", u[j]);
        printf("\n");}
}

int main(int argc, char *argv[]){
  int do_print = 0;
  if (argc > 1) do_print = atoi(argv[1]);
  t<<<1,32>>>(do_print);
  t<<<80*1000,32>>>(do_print);
  cudaDeviceSynchronize();
}
$ nvcc -lineinfo -Xptxas=-v -arch=sm_70 -o t1987 t1987.cu
ptxas info    : 6 bytes gmem
ptxas info    : Compiling entry function '_Z1ti' for 'sm_70'
ptxas info    : Function properties for _Z1ti
    144 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 87 registers, 356 bytes cmem[0]
$ nvprof ./t1986 1024
==11142== NVPROF is profiling process 11142, command: ./t1986 1024
==11142== Profiling application: ./t1986 1024
==11142== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:  100.00%  152.52us         2  76.258us  7.8720us  144.64us  void t<int>(int)
      API calls:   97.80%  342.76ms         2  171.38ms  17.766us  342.74ms  cudaLaunchKernel
                    1.34%  4.6906ms         4  1.1727ms  588.52us  2.9028ms  cuDeviceTotalMem
                    0.70%  2.4534ms       404  6.0720us     303ns  277.77us  cuDeviceGetAttribute
                    0.11%  380.59us         4  95.147us  59.160us  189.63us  cuDeviceGetName
                    0.04%  155.54us         1  155.54us  155.54us  155.54us  cudaDeviceSynchronize
                    0.01%  25.081us         4  6.2700us  2.8620us  14.372us  cuDeviceGetPCIBusId
                    0.00%  9.6120us         8  1.2010us     395ns  4.8200us  cuDeviceGet
                    0.00%  3.2680us         4     817ns     650ns  1.1840us  cuDeviceGetUuid
                    0.00%  2.5920us         3     864ns     405ns  1.2770us  cuDeviceGetCount
$ nvprof ./t1987 1024
==11156== NVPROF is profiling process 11156, command: ./t1987 1024
==11156== Profiling application: ./t1987 1024
==11156== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:  100.00%  8.4755ms         2  4.2377ms  6.8480us  8.4686ms  t(int)
      API calls:   94.59%  286.63ms         2  143.31ms  12.607us  286.61ms  cudaLaunchKernel
                    2.80%  8.4926ms         1  8.4926ms  8.4926ms  8.4926ms  cudaDeviceSynchronize
                    1.67%  5.0492ms         4  1.2623ms  597.28us  3.2418ms  cuDeviceTotalMem
                    0.81%  2.4489ms       404  6.0610us     393ns  266.26us  cuDeviceGetAttribute
                    0.12%  376.16us         4  94.040us  60.237us  180.88us  cuDeviceGetName
                    0.01%  22.479us         4  5.6190us  2.7930us  11.912us  cuDeviceGetPCIBusId
                    0.00%  12.697us         8  1.5870us     463ns  5.1350us  cuDeviceGet
                    0.00%  3.3600us         4     840ns     697ns  1.1470us  cuDeviceGetUuid
                    0.00%  2.9690us         3     989ns     442ns  1.4120us  cuDeviceGetCount
$

For 80,000 32x32 transposes on V100, the local memory version runs in about 8ms whereas the register-only version runs in about 0.2ms. So I guess the extra code complexity might be “worth it” in some cases.


A long time ago I learned to put the question “What’s the fastest way to physically transpose a matrix?” into the same category as “What’s the fastest way to compute the inverse of a matrix?” and “What’s the fastest way to perform bulk memory copies?” In other words, when these questions are asked it is often a red flag that a sub-optimal course of computational action is being pursued: these operations should almost never be performed.

From having written the initial version of CUBLAS, where many of the API calls allow the specification of matrix transpose modes, I recall that none of these actually required the physical transposition of a matrix. Could someone educate me for which (common?) use cases the physical transposition of matrices is actually needed?

You have to admit, the task at hand poses a bit of an intellectual challenge, similar to asking “what’s the fastest way to solve a Rubik’s cube?” For this reason alone I like this thread.

As to where this is needed: simulating (and implementing) multi-antenna radio systems often requires multiplying matrices with their conjugate transpose (Hermitian adjoint). Receiver models such as Zero Forcing receivers will require inversion of the channel matrix.
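For concreteness (my notation, not from the post), the Zero Forcing estimate is just the Moore-Penrose pseudoinverse of the channel matrix H applied to the received vector y:

\hat{s}_{\mathrm{ZF}} = (H^{H} H)^{-1} H^{H} y

which is the same expression that appears in code form further down the thread.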

[image: receiver]

Matrix sizes for such systems are often small enough to be held in registers entirely. This is the professional reason why I dabble with this.

There is a notable lack of CUDA libraries with a device-side API that allow for efficient small-size matrix operations. The Eigen library offers some support, but it handles each matrix with just one CUDA thread, which causes a lot of local/global memory access.

Thanks for the feedback.

I would argue that the CUDA universe was ahead of the curve by adding API support for batched small matrix operations before any other widely-used libraries. As I recall my research into this (ca. 2012 - 2013), the one-thread-per-matrix approach is optimal up to about a size of 8x8 to 10x10, depending on data type as that plays into register usage. Now, with the number of “CUDA cores” per GPU approaching 20,000, you better have some pretty large batches to work on to keep the GPU busy …

Beyond those tiny sizes, things become a bit more iffy. Up to 30x30 or thereabouts there are various approaches that can work reasonably well but need to be tuned differently for each GPU architecture. And above that up to about 128x128 things turn really ugly: excellent solutions in that space did not exist before I retired in 2014. No idea where things stand now. Beyond 128x128 one can use classical parallelization methods with blocking/tiling until GPU memory runs out.

Together with other design issues, like the various types of rectangular matrices, the number of required GPU kernels starts exploding, and then you need to craft a good “picker” (a heuristic that chooses the best kernel for a given scenario). Pretty soon you are looking at expending hundreds and thousands of engineering hours.

Technology companies like to run their teams lean, so there is never enough manpower to get around to all the work one could be doing. That is why it is important for CUDA users to file RFEs for any feature that is desired, because that creates a squeaky wheel in the form of trackable issues that engineering managers can charge engineering time against.


Batched APIs for small matrix operations are not optimal in that they are usually implemented as host side APIs and they require all input and output data to reside in global memory on the GPU. Then you have to chain multiple such API calls in order to get the final result of an operation. That causes a lot of global memory access and kernel launch overhead.

It’s much better for the programmer to be able to write a code line like this in one CUDA kernel, and intermediate results of such an arithmetic operation are ideally never written to local or global memory.

auto sZF = (H.conjugate().transpose() * H).inverse() * H.conjugate().transpose() * y; // ZF receiver

Matrix sizes I usually work with are up to 32x32, so operations can still be handled at the warp level.

As far as large batches are concerned, we have those - considering that you often want to simulate hundreds or thousands of radio links simultaneously at a frequency resolution of several hundreds or thousands of subcarriers.

It’s probably correct to say that in most cases, transposing matrices should be avoided. That doesn’t strike me as very controversial. Given that this is a public thread, it would probably be a good idea to put DON’T DO THIS at the top of the thread.

In my case, the motivation was something like cbuchner’s Rubik’s cube. Compared to many people at NVIDIA and many people outside of NVIDIA (probably including njuffa and cbuchner) I am still pretty much a beginner CUDA programmer. As a beginner, there is still much for me to learn. You can learn how to do harder things by trying those things, even if some of them are not realistic approaches to real-world problem solving. In this case (and in the previous thread that used threadIdx.x^i), it started out as merely an interesting challenge, perhaps like the Rubik’s cube. However, in this thread, I was interested in learning how to work on a more substantial problem and to be a careful programmer faced with the challenge to “keep things in registers”. It’s not purely a trivial matter. First you need to come up with a method; second, you can make mistakes along the way that will cause the compiler to drop out of “register mode” and start to use local memory. So it was a way of getting some practice on a somewhat more involved problem with that goal. I think it makes me a better programmer.

One of the most satisfying outcomes was the |grep LD part. The performance note in the last of my postings was really an afterthought. I generally have the mantra that you should code things up in a way that seems sensible to you, readable, maintainable, and only refactor things if you have solid evidence that there is a performance problem. So we had something like that (not well documented) at the beginning of this thread. As a result, the adjunct question for me was “does all this work actually make any sense?”. In the global sense, to njuffa’s point, you probably shouldn’t do this at all. But if you are going to do it (it’s a free country, after all), does it make sense to try and optimize it this way? I confess I was very doubtful before I ran the experiment. I was shocked to see the performance result. I am a bit skeptical of it still. I wonder if I have done something wrong. A 58x improvement seems implausible. There is something to be learned there. I have not learned it yet.


Breaking the rules of conventional wisdom is fine as long as each case is carefully considered. And sometimes conventional wisdom becomes outdated as HW and SW contexts evolve. Likewise, exploratory programming approaches of all kind help us software engineers develop a solid feel for a computing platform and might evolve into interesting new paradigms. I highly recommend such tinkering. Unfortunately, many R&D engineers in industry often don’t have much time for it.

The reason physical transposition of matrices and bulk memory copies are typically discouraged is that data movement is very expensive (energetically and in terms of performance), so it should be minimized and should ideally only take place as part of computation. In other words, each time we touch some data, we want to do something transformative with it before we store it away. The rule against inverting matrices stems from the fact that it can easily have a strongly negative impact on the accuracy of the solutions produced compared to the use of a solver, which is no more computationally expensive.

That said, I recall at least one use case where I did explicitly compute the (pseudo-)inverse of a matrix. I up-converted the matrix to double precision, computed the Moore-Penrose pseudoinverse, down-converted the result to single precision, and then applied it to batches of hundreds to thousands of small single-precision matrices. Worked like a charm in that case (no issues with accuracy), and because the inverse needed to be computed just once, batched matrix multiplications made this faster than solving thousands of small systems.


Many moons ago, I wrote a code generator that would emit CUDA/GLSL code that transposed an array in registers as long as it met a few conditions:

  • Row count must be an even number.
  • Col count must be a power of 2 (e.g. WARP_SIZE)
  • The transpose requires (cols_log2 * rows / 2) row-pair “blends” (worked out below the list).
  • Rows might be permuted after the “blends” and can simply be “remapped” with a new symbolic name.
  1. The user-defined “blend” invocation exchanges values between two rows.
  2. The user-defined “remap” function renames a row.
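As a quick sanity check of that blend count: for the 8x16 slab below, cols_log2 * rows / 2 = 3 * 16 / 2 = 24 blends, which matches the 24 HS_TRANSPOSE_BLEND lines in the macro listing (8 per stage, 3 stages).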

I think it wound up looking very similar to the solution by @Robert_Crovella – it’s probably exactly the same strategy. 😊


An 8x16 (WxH) matrix wound up looking like this:

#define HS_TRANSPOSE_SLAB()                \
  HS_TRANSPOSE_STAGE( 1 )                  \
  HS_TRANSPOSE_STAGE( 2 )                  \
  HS_TRANSPOSE_STAGE( 3 )                  \
  HS_TRANSPOSE_BLEND( r, s,  1,   2,   1 ) \
  HS_TRANSPOSE_BLEND( r, s,  1,   4,   3 ) \
  HS_TRANSPOSE_BLEND( r, s,  1,   6,   5 ) \
  HS_TRANSPOSE_BLEND( r, s,  1,   8,   7 ) \
  HS_TRANSPOSE_BLEND( r, s,  1,  10,   9 ) \
  HS_TRANSPOSE_BLEND( r, s,  1,  12,  11 ) \
  HS_TRANSPOSE_BLEND( r, s,  1,  14,  13 ) \
  HS_TRANSPOSE_BLEND( r, s,  1,  16,  15 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,   3,   1 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,   4,   2 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,   7,   5 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,   8,   6 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,  11,   9 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,  12,  10 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,  15,  13 ) \
  HS_TRANSPOSE_BLEND( s, t,  2,  16,  14 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,   5,   1 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,   6,   2 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,   7,   3 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,   8,   4 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,  13,   9 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,  14,  10 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,  15,  11 ) \
  HS_TRANSPOSE_BLEND( t, u,  3,  16,  12 ) \
  HS_TRANSPOSE_REMAP( u,   1,   1 )        \
  HS_TRANSPOSE_REMAP( u,   2,   3 )        \
  HS_TRANSPOSE_REMAP( u,   3,   5 )        \
  HS_TRANSPOSE_REMAP( u,   4,   7 )        \
  HS_TRANSPOSE_REMAP( u,   5,   9 )        \
  HS_TRANSPOSE_REMAP( u,   6,  11 )        \
  HS_TRANSPOSE_REMAP( u,   7,  13 )        \
  HS_TRANSPOSE_REMAP( u,   8,  15 )        \
  HS_TRANSPOSE_REMAP( u,   9,   2 )        \
  HS_TRANSPOSE_REMAP( u,  10,   4 )        \
  HS_TRANSPOSE_REMAP( u,  11,   6 )        \
  HS_TRANSPOSE_REMAP( u,  12,   8 )        \
  HS_TRANSPOSE_REMAP( u,  13,  10 )        \
  HS_TRANSPOSE_REMAP( u,  14,  12 )        \
  HS_TRANSPOSE_REMAP( u,  15,  14 )        \
  HS_TRANSPOSE_REMAP( u,  16,  16 )
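For reference, here is a speculative sketch (my own guess, written as a device function rather than the generator’s actual macros, with names of my choosing) of the kind of operation a “blend” performs: lanes whose stage bit is clear trade row_a for their partner lane’s row_b via a single warp shuffle, and a “remap” is then just a rename of a register variable.

  template <typename T>
  __device__ __forceinline__ void hs_blend_sketch(T &row_a, T &row_b, int stage){
    const int  dist = 1 << (stage - 1);                 // xor distance between partner lanes
    const bool hi   = (threadIdx.x & dist) != 0;
    // every lane participates in the shuffle; each sends the row it is giving away
    const T exch = __shfl_xor_sync(0xFFFFFFFF, hi ? row_b : row_a, dist);
    if (hi) row_b = exch;   // high lane receives its partner's row_a
    else    row_a = exch;   // low lane receives its partner's row_b
  }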