Increasing CGMA by making two calculations in one thread (matrix multiplication)

I have a standard matrix multiplication algorithm which uses shared memory.
My task is to increase its CGMA by making two calculations in one thread,
but I have no clue how to start. Should one block calculate a piece of the matrix twice its size?
For example: a block of size 32 calculates 2 * 32 sub-matrices.

Will this work? Can someone give me an example based on the code below?

template <int BLOCK_SIZE> __global__ void
matrixMulCUDA(float *C, float *A, float *B, int wA, int wB)
{
    // Block index
    int bx = blockIdx.x;
    int by = blockIdx.y;

    // Thread index
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // Index of the first sub-matrix of A processed by the block
    int aBegin = wA * BLOCK_SIZE * by;

    // Index of the last sub-matrix of A processed by the block
    int aEnd   = aBegin + wA - 1;

    // Step size used to iterate through the sub-matrices of A
    int aStep  = BLOCK_SIZE;

    // Index of the first sub-matrix of B processed by the block
    int bBegin = BLOCK_SIZE * bx;

    // Step size used to iterate through the sub-matrices of B
    int bStep  = BLOCK_SIZE * wB;

    // Csub is used to store the element of the block sub-matrix
    // that is computed by the thread
    float Csub = 0;

    // Loop over all the sub-matrices of A and B
    // required to compute the block sub-matrix
    for (int a = aBegin, b = bBegin;
         a <= aEnd;
         a += aStep, b += bStep)
    {

        // Declaration of the shared memory array As used to
        // store the sub-matrix of A
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];

        // Declaration of the shared memory array Bs used to
        // store the sub-matrix of B
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load the matrices from device memory
        // to shared memory; each thread loads
        // one element of each matrix
        As[ty][tx] = A[a + wA * ty + tx];
        Bs[ty][tx] = B[b + wB * ty + tx];

        // Synchronize to make sure the matrices are loaded
        __syncthreads();

        // Multiply the two matrices together;
        // each thread computes one element
        // of the block sub-matrix
#pragma unroll

        for (int k = 0; k < BLOCK_SIZE; ++k)
        {
            Csub += As[ty][k] * Bs[k][tx];
        }

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write the block sub-matrix to device memory;
    // each thread writes one element
    int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx;
    C[c + wB * ty + tx] = Csub;
}

In case anybody is wondering: CGMA = compute to global memory access (ratio). Who came up with that FLA? I had to Google it.
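
For what it's worth, you can work it out for the kernel above: per tile iteration each thread issues 2 global loads (one element of As, one of Bs) and performs 2 * BLOCK_SIZE floating-point operations (one multiply and one add per k), so, ignoring the single final store of C,

    CGMA = (2 * BLOCK_SIZE flops) / (2 global loads) = BLOCK_SIZE

i.e. 32 for 32x32 blocks. Squeezing more than one output per thread out of the same loads is exactly what raises that ratio.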

“Will this work?” Well, what happened when you tried it?

Of course I tried it, but it didn't work. My main question is whether I understand this correctly, because I have to compare three types of matrix multiplication algorithms. I understand the first two perfectly; for the third I have no clue.
It goes like this:
Effectiveness comparison of parallel algorithms:
CPU, 3 loops - ikj order
CPU, 3 loops - ijk order (both sketched below)
GPU, each thread calculates two outputs/answers (increasing the CGMA), using shared memory (two areas: concurrent calculation and filling)
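
For reference, the two CPU variants differ only in the order of the three loops; what I implemented looks roughly like this (row-major arrays, hA rows in A, and C zeroed beforehand for the ikj version):

    // ijk: the innermost loop strides down a column of B (large stride, cache-unfriendly)
    for (int i = 0; i < hA; ++i)
        for (int j = 0; j < wB; ++j)
        {
            float sum = 0.0f;
            for (int k = 0; k < wA; ++k)
                sum += A[i * wA + k] * B[k * wB + j];
            C[i * wB + j] = sum;
        }

    // ikj: the innermost loop streams over contiguous rows of B and C
    // (C must be zero-initialized before this)
    for (int i = 0; i < hA; ++i)
        for (int k = 0; k < wA; ++k)
        {
            float a = A[i * wA + k];
            for (int j = 0; j < wB; ++j)
                C[i * wB + j] += a * B[k * wB + j];
        }

The ikj order touches B and C contiguously in the innermost loop, which is why it is so much more cache-friendly.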

I have successfully done those calculations on the CPU and measured the calculation times.
I also understand simple GPU matrix multiplication using global memory, and the shared-memory version using sub-matrices.
But I totally don't understand what I am supposed to do for the third one.
Can somebody help me?

Didn’t work how?

I did something like this, but as I said I don’t know if I understand it correctly:

template <int BLOCK_SIZE> __global__ void
matrixMulCUDA(float *C, float *A, float *B, int wA, int wB)
{
    // Block index
    int bx = blockIdx.x;
    int by = blockIdx.y;

    // Thread index
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // Index of the first sub-matrix of A processed by the block
    int aBegin = wA * BLOCK_SIZE * 2 * by; // Increased sub-matrix width for every thread to load

    // Index of the last sub-matrix of A processed by the block
    int aEnd   = aBegin + wA * 2 - 1; // Increased sub-matrix width for every thread to load

    // Step size used to iterate through the sub-matrices of A
    int aStep  = BLOCK_SIZE * 2; // Iteration should also be increased

    // Index of the first sub-matrix of B processed by the block
    int bBegin = BLOCK_SIZE * 2 * bx; // Same as above

    // Step size used to iterate through the sub-matrices of B
    int bStep  = BLOCK_SIZE * 2 * wB;

    // Csub is used to store the element of the block sub-matrix
    // that is computed by the thread
    float Csub = 0;
    float Csub2 = 0; // Added a second output variable

    // Loop over all the sub-matrices of A and B
    // required to compute the block sub-matrix
    for (int a = aBegin, b = bBegin;
         a <= aEnd;
         a += aStep, b += bStep)
    {

        // Declaration of the shared memory array As used to
        // store the sub-matrix of A
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE*2]; // Totally no clue if I did this correctly

        // Declaration of the shared memory array Bs used to
        // store the sub-matrix of B
        __shared__ float Bs[BLOCK_SIZE*2][BLOCK_SIZE]; // Same as above

        // Load the matrices from device memory
        // to shared memory; each thread loads
        // one element of each matrix
        As[ty][tx] = A[a + wA * ty + tx];
        Bs[ty][tx] = B[b + wB * ty + tx];

        As[ty][tx+BLOCK_SIZE] = A[a + wA * 2 * ty + tx];
        Bs[ty+BLOCK_SIZE][tx] = B[b + wB * 2 * ty + tx];

        // Synchronize to make sure the matrices are loaded
        __syncthreads();

        // Multiply the two matrices together;
        // each thread computes one element
        // of the block sub-matrix
#pragma unroll

        for (int k = 0; k < BLOCK_SIZE; ++k)
        {
            Csub += As[ty][k] * Bs[k][tx];
            Csub2 += As[ty][k+BLOCK_SIZE] * Bs[k+BLOCK_SIZE][tx];
        }

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write the block sub-matrix to device memory;
    // each thread writes one element
    int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx;
    int c2 = wB * BLOCK_SIZE * 2 * by + BLOCK_SIZE * 2 * bx;
    C[c + wB * ty + tx] = Csub;
    C[c2 + wB * ty + tx] = Csub2;
}

It gives the wrong answer.

Hi,
I’ve actually modified that code in the past for 2-ILP, 4-ILP and 8-ILP … but I can’t find the code.

As I recall it's a bit tricky. I will keep looking for my code. Until then, I suggest you read the Volkov paper on this subject.

If you type in the basic examples (and measure the results using the debugger/profiler) you will generate a lot of insight into the problem.
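
In the meantime, here is the 2-output idea roughly as I remember it; treat it as a sketch, not a reference solution. Two problems with your attempt, as far as I can tell: Csub2 accumulates the second half of the k-range of the same output element (As[ty][k + BLOCK_SIZE] * Bs[k + BLOCK_SIZE][tx] is still row ty times column tx), so it would have to be added into Csub rather than written to a second location; and the extra loads index with wA * 2 * ty and wB * 2 * ty, which steps outside the current sub-matrix. The usual trick is different: keep the tiles square in k, but let each block cover a BLOCK_SIZE x 2*BLOCK_SIZE tile of C, so the As tile is loaded once and reused for two outputs per thread. Per tile iteration that is 3 global loads for 4 * BLOCK_SIZE flops instead of 2 loads for 2 * BLOCK_SIZE, so the CGMA goes from BLOCK_SIZE to 4 * BLOCK_SIZE / 3. The sketch assumes all matrix dimensions are multiples of BLOCK_SIZE (hA being the height of A) and needs half as many blocks in x, i.e. dim3 grid(wB / (2 * BLOCK_SIZE), hA / BLOCK_SIZE):

// Hypothetical name; a 2-outputs-per-thread variant of the kernel above
template <int BLOCK_SIZE> __global__ void
matrixMulCUDA2out(float *C, float *A, float *B, int wA, int wB)
{
    int bx = blockIdx.x;
    int by = blockIdx.y;
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // A tiles: the same row band as in the original kernel
    int aBegin = wA * BLOCK_SIZE * by;
    int aEnd   = aBegin + wA - 1;
    int aStep  = BLOCK_SIZE;

    // B tiles: each block now covers a band of 2*BLOCK_SIZE columns of B
    int bBegin = 2 * BLOCK_SIZE * bx;
    int bStep  = BLOCK_SIZE * wB;

    // Two accumulators: each thread owns two elements of C,
    // BLOCK_SIZE columns apart
    float Csub  = 0;
    float Csub2 = 0;

    for (int a = aBegin, b = bBegin; a <= aEnd; a += aStep, b += bStep)
    {
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
        __shared__ float Bs[BLOCK_SIZE][2 * BLOCK_SIZE];

        // One A element and two B elements per thread:
        // 3 global loads feed 4*BLOCK_SIZE flops below
        As[ty][tx]              = A[a + wA * ty + tx];
        Bs[ty][tx]              = B[b + wB * ty + tx];
        Bs[ty][tx + BLOCK_SIZE] = B[b + wB * ty + tx + BLOCK_SIZE];

        __syncthreads();

#pragma unroll
        for (int k = 0; k < BLOCK_SIZE; ++k)
        {
            // As[ty][k] is read once and reused for both outputs
            Csub  += As[ty][k] * Bs[k][tx];
            Csub2 += As[ty][k] * Bs[k][tx + BLOCK_SIZE];
        }

        __syncthreads();
    }

    // Each thread writes its two elements of C
    int c = wB * BLOCK_SIZE * by + 2 * BLOCK_SIZE * bx;
    C[c + wB * ty + tx]              = Csub;
    C[c + wB * ty + tx + BLOCK_SIZE] = Csub2;
}

The same idea extends to 4 or 8 outputs per thread (the ILP variants I mentioned), with register pressure eventually becoming the limit.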

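As for the "two areas" part of your assignment (one shared-memory buffer being filled while the other is used for calculations): that is double buffering. Here is a sketch of just that technique applied to the original one-output kernel, again assuming wA is a multiple of BLOCK_SIZE. You pay twice the shared memory, and in exchange the loop needs one __syncthreads() per tile instead of two:

// Hypothetical name; a double-buffered variant of the original kernel
template <int BLOCK_SIZE> __global__ void
matrixMulCUDAdbuf(float *C, float *A, float *B, int wA, int wB)
{
    int bx = blockIdx.x, by = blockIdx.y;
    int tx = threadIdx.x, ty = threadIdx.y;

    int aBegin   = wA * BLOCK_SIZE * by;
    int bBegin   = BLOCK_SIZE * bx;
    int numTiles = wA / BLOCK_SIZE;

    // Two buffers per input: one is computed from while the other fills
    __shared__ float As[2][BLOCK_SIZE][BLOCK_SIZE];
    __shared__ float Bs[2][BLOCK_SIZE][BLOCK_SIZE];

    // Preload tile 0 into buffer 0
    As[0][ty][tx] = A[aBegin + wA * ty + tx];
    Bs[0][ty][tx] = B[bBegin + wB * ty + tx];
    __syncthreads();

    float Csub = 0;

    for (int t = 0; t < numTiles; ++t)
    {
        int cur = t & 1;      // buffer we compute from
        int nxt = cur ^ 1;    // buffer we fill meanwhile

        // Fill the other buffer with tile t+1; no barrier is needed first,
        // because the loop below only reads buffer 'cur'
        if (t + 1 < numTiles)
        {
            int a = aBegin + (t + 1) * BLOCK_SIZE;
            int b = bBegin + (t + 1) * BLOCK_SIZE * wB;
            As[nxt][ty][tx] = A[a + wA * ty + tx];
            Bs[nxt][ty][tx] = B[b + wB * ty + tx];
        }

#pragma unroll
        for (int k = 0; k < BLOCK_SIZE; ++k)
            Csub += As[cur][ty][k] * Bs[cur][k][tx];

        // One barrier per iteration instead of two: it both publishes the
        // 'nxt' tile and protects 'cur' from being overwritten next round
        __syncthreads();
    }

    int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx;
    C[c + wB * ty + tx] = Csub;
}
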
–Bob