Performance of 2D CUDA texture memory versus global memory

I’m implementing a simple boxcar filter, mainly as an excuse to evaluate the speed difference between 2D texture and global memory accesses.

In more detail, the .cu file is the following:

#include <cuda.h>
#include <cuda_runtime.h>
#include "cufft.h"
#include "Kernels_Test_Texture_Float.cuh"

#define BLOCK_SIZE_x 16
#define BLOCK_SIZE_y 16

/**********************/
/* TEST TEXTURE FLOAT */
/**********************/
extern "C" void Function_Test_Texture_Float(float* data, float* dev_result, int N1, int N2){

    size_t pitch; 
    float* data_d;
    cudaMallocPitch((void**)&data_d,&pitch, N1 * sizeof(float), N2);
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaBindTexture2D(0,&data_d_texture,data_d,&desc,N1,N2,pitch);
    cudaMemcpy2D(data_d,pitch,data,sizeof(float)*N1,sizeof(float)*N1,N2,cudaMemcpyHostToDevice);

    cudaMemset(dev_result,0,sizeof(float)*N1*N2);
    dim3 dimBlock(BLOCK_SIZE_x, BLOCK_SIZE_y);
    dim3 dimGrid(N1 / BLOCK_SIZE_x + (N1 % BLOCK_SIZE_x == 0 ? 0 : 1),
                 N2 / BLOCK_SIZE_y + (N2 % BLOCK_SIZE_y == 0 ? 0 : 1));
    Kernel_Test_Texture_Float<<<dimGrid, dimBlock>>>(dev_result, N1, N2);

}

/**************/
/* TEST FLOAT */
/**************/
extern "C" void Function_Test_Float(float* data, float* dev_result2, int N1, int N2){

    float* data_d;  cudaMalloc((void**)&data_d,sizeof(float)*N1*N2);
    cudaMemcpy(data_d,data,sizeof(float)*N1*N2,cudaMemcpyHostToDevice); 

    cudaMemset(dev_result2,0,sizeof(float)*N1*N2);
    dim3 dimBlock(BLOCK_SIZE_x, BLOCK_SIZE_y);
    dim3 dimGrid(N1 / BLOCK_SIZE_x + (N1 % BLOCK_SIZE_x == 0 ? 0 : 1),
                 N2 / BLOCK_SIZE_y + (N2 % BLOCK_SIZE_y == 0 ? 0 : 1));
    Kernel_Test_Float<<<dimGrid, dimBlock>>>(dev_result2, data_d, N1, N2);

}
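As an aside, when comparing the two kernels for speed it is worth first confirming that they produce identical results. A minimal CPU reference for the same 5x5 accumulation might look like the sketch below; the function name boxcar_ref is hypothetical, and it assumes a row stride of N1, which matches the kernels only when N1 is a multiple of the block width (the kernels use blockDim.x*gridDim.x as the row stride):

```cpp
#include <vector>

// CPU reference for the 5x5 boxcar accumulation performed by both kernels.
// Assumes row-major storage with row stride N1 (hypothetical helper, not
// part of the original code).
void boxcar_ref(const float* in, float* out, int N1, int N2)
{
    const int size_x = 5, size_y = 5;
    for (int j = 0; j < N2; j++)
        for (int i = 0; i < N1; i++) {
            if ((i < N1 - size_x) && (j < N2 - size_y)) {
                float acc = 0.f;
                for (int k = 0; k < size_x; k++)
                    for (int l = 0; l < size_y; l++)
                        acc += in[(j + l) * N1 + (i + k)];
                out[j * N1 + i] = acc;
            } else {
                // the host code memsets the result to 0, so border
                // elements the kernels never write stay at 0
                out[j * N1 + i] = 0.f;
            }
        }
}
```

Copying dev_result and dev_result2 back with cudaMemcpy and diffing them against this reference would rule out indexing mistakes before drawing conclusions from the timings.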

The .cuh file is the following:

texture<float,2> data_d_texture;

/**************************/
/* 2D TEXTURE TEST KERNEL */
/**************************/
__global__ void Kernel_Test_Texture_Float(float* dev_result, int N1, int N2)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    int j = threadIdx.y + blockDim.y * blockIdx.y;

    float datum, accumulator=0.;

    int size_x=5;
    int size_y=5;

    if((i<(N1-size_x))&&(j<(N2-size_y)))
    {
        for (int k=0; k<size_x; k++)
            for (int l=0; l<size_y; l++){
                datum = tex2D(data_d_texture, i+k, j+l);
                accumulator = accumulator + datum;
            }
        dev_result[j*blockDim.x*gridDim.x+i] = accumulator;
    }
}

/******************/
/* 2D TEST KERNEL */
/******************/
__global__ void Kernel_Test_Float(float* dev_result2, float* data_d, int N1, int N2)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    int j = threadIdx.y + blockDim.y * blockIdx.y;

    float accumulator=0.;

    int size_x=5;
    int size_y=5;

    if((i<(N1-size_x))&&(j<(N2-size_y)))
    {
        for (int k=0; k<size_x; k++)
            for (int l=0; l<size_y; l++){
                accumulator = accumulator + data_d[(j+l)*blockDim.x*gridDim.x+(i+k)];
            }
        dev_result2[j*blockDim.x*gridDim.x+i] = accumulator;
    }
}

However, the global memory kernel turns out to be much faster than the texture memory kernel (94 µs vs 615 µs; the timings come from the Visual Profiler, and the card is a GeForce GT 540M).

Is there anything wrong in my use of texture memory, or is global memory indeed faster than texture memory, given that it is cached (L1 & L2)?

I have found the paper http://math.arizona.edu/~dongbin/Publications/GPUImplementations.pdf. If my understanding is correct (see Fig. 4), for a GTX 480, which has compute capability 2.0, global memory storage is preferable to texture memory storage. Should we conclude that, unless implementing (e.g.) linear interpolation (which should be faster when implemented in hardware by the texture units), global memory storage is better than texture memory storage for general-purpose applications?

Thanks in advance for any comment.

I skimmed through the paper, and I think the most relevant line from that paper for you is
“Texture memory provides convenient indexing and filtering but the global memory has a higher bandwidth cache.”

One thing to note is that they mention their problem has good locality and that they rarely have cache misses, so what you might be seeing there is the difference in texture vs. global memory cache speeds. To be honest, I haven’t used texture memory recently, as I generally flatten any matrices to vectors. I think texture memory was more significant back in the early days of CUDA, when global memory wasn’t cached.

To add onto this, the programming guide mentions the following as benefits of texture memory:

  • If the memory reads do not follow the access patterns that global or constant memory reads must follow to get good performance, higher bandwidth can be achieved provided that there is locality in the texture fetches or surface reads;
  • Addressing calculations are performed outside the kernel by dedicated units;
  • Packed data may be broadcast to separate variables in a single operation;
  • 8-bit and 16-bit integer input data may be optionally converted to 32-bit floating-point values in the range [0.0, 1.0] or [-1.0, 1.0].

So it looks like you’re right that in the general case global memory will be better, while for random access texture memory might end up being faster (as global memory will presumably have high cache miss rates). Also, I believe texture memory’s caching is 2D, so if you’re frequently accessing 2D tiles of data that exhibit spatial locality, it might have better cache hit rates.
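To put a rough number on that locality argument for this particular filter (a back-of-the-envelope sketch of my own, not a figure from the paper): a 16x16 thread block in which each thread reads a 5x5 window only touches a (16+4)x(16+4) footprint of distinct elements while issuing 16*16*25 reads, so each element is fetched many times and almost any cache helps:

```cpp
// Average number of times each distinct input element is read by one
// thread block of size bx x by, with a boxcar window of size wx x wy
// (hypothetical helper, purely for illustration).
double reuse_factor(int bx, int by, int wx, int wy)
{
    long total_reads = (long)bx * by * wx * wy;             // reads issued by the block
    long distinct    = (long)(bx + wx - 1) * (by + wy - 1); // distinct elements touched
    return (double)total_reads / distinct;
}
```

For the 16x16 block and 5x5 window in the code above this gives 6400 reads over 400 distinct elements, i.e. a 16x reuse factor, so both the L1/L2 path and the texture cache should see high hit rates here, and the comparison mostly measures raw cache throughput.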

I’d be interested to hear from anyone that uses texture memory on a more regular basis though, as I’m not a very authoritative figure on the matter.

I can report reaching serious speedup improvements when my reads have had:

  • Some data locality
  • Indexing benefits ( mirror mode, clamp, etc )

You can do some really cool things for filtering and interpolation by using textures!
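For what it’s worth, the indexing benefit mentioned above is that the texture addressing mode (clamp, wrap, mirror, border) handles out-of-range coordinates in hardware, so the kernel needs no explicit boundary tests. A rough CPU illustration of what cudaAddressModeClamp does (hypothetical helper functions, just to show the idea):

```cpp
// CPU illustration of cudaAddressModeClamp with unnormalized coordinates:
// out-of-range indices are clamped to the valid range.
int clamp_index(int i, int n)
{
    if (i < 0)  return 0;
    if (i >= n) return n - 1;
    return i;
}

// Fetch emulating clamped point sampling on an N1 x N2 row-major array.
float fetch_clamped(const float* data, int N1, int N2, int x, int y)
{
    return data[clamp_index(y, N2) * N1 + clamp_index(x, N1)];
}
```

With the texture reference in the code above, this behavior would be enabled on the device side by setting data_d_texture.addressMode[0] and addressMode[1] to cudaAddressModeClamp before binding.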

Thank you, Alrikai, for your answer. I generally agree with your comment, and I also have the feeling that using textures for data locality was important for “older” GPUs. But I have a concern when you mention “random access”: why do you think that in that case texture memory will be faster? For truly random access, I would conclude that neither global nor texture memory will exploit any cache. Am I right?

Also, thank you, Jimmy. I agree that texture memory can be helpful in filtering and interpolation. But here I’m discussing its usefulness when a “general” application can benefit from data locality only. Do you have any document where you observed speedups when using texture memory in a problem with data locality only (apart from filtering or interpolation)?

Yes, and since the global memory cache has higher throughput than the texture cache, if neither gets any use out of its respective cache, their speeds should be comparable. If anything, since texture caching has 2D locality, texture memory might have better cache hit rates than global memory for such accesses; that is, the texture caching scheme can be more forgiving of access patterns that don’t follow global memory coalescing requirements.