Hi all, I'm also confused by how the texture cache connects to the L2 cache. As I understand it, a texture cache line covers a 2D block. However, when data is not in the texture cache, the lookup goes to L2. So in the L2 cache, is the cache line for texture data also a 2D block? If not, how do the two caches communicate with each other? Thanks.
Looking at Nsight Memory Statistics it seems like the transaction size requested from L2 is the same size as when coming out of the texture cache (32 bytes). So I’m guessing the transactions are exactly for the same data which jibes with this statement:
“Texture memory is designed for streaming fetches with a constant latency; a texture cache hit reduces device memory bandwidth usage, but not fetch latency.”
The CUDA Handbook goes into a bit of detail about how the 2D locality might be implemented.
But I guess the key is to play with the geometry of your requests and try to raise your hit rate and lower your transactions per request.
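To make "the geometry of your requests" concrete, here is a minimal sketch (my own untested example, variable names are made up) of fetching through a 2D texture object with a square thread block, so each warp's footprint is a small 2D tile rather than a long 1D strip:

```cuda
#include <cuda_runtime.h>

__global__ void fetch2D(cudaTextureObject_t tex, float *out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        // +0.5f addresses the texel center (unnormalized coordinates)
        out[y * w + x] = tex2D<float>(tex, x + 0.5f, y + 0.5f);
}

int main()
{
    const int w = 256, h = 256;

    // Pitched device allocation backing the texture
    float *d_src; size_t pitch;
    cudaMallocPitch(&d_src, &pitch, w * sizeof(float), h);

    cudaResourceDesc res = {};
    res.resType = cudaResourceTypePitch2D;
    res.res.pitch2D.devPtr       = d_src;
    res.res.pitch2D.desc         = cudaCreateChannelDesc<float>();
    res.res.pitch2D.width        = w;
    res.res.pitch2D.height       = h;
    res.res.pitch2D.pitchInBytes = pitch;

    cudaTextureDesc td = {};
    td.readMode       = cudaReadModeElementType;
    td.filterMode     = cudaFilterModePoint;
    td.addressMode[0] = cudaAddressModeClamp;  // wrap needs normalized coords
    td.addressMode[1] = cudaAddressModeClamp;

    cudaTextureObject_t tex;
    cudaCreateTextureObject(&tex, &res, &td, nullptr);

    float *d_out;
    cudaMalloc(&d_out, w * h * sizeof(float));

    // 16x16 blocks: each warp touches a compact 2D footprint, which is
    // what the texture cache's 2D locality is designed for.
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    fetch2D<<<grid, block>>>(tex, d_out, w, h);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(tex);
    cudaFree(d_src); cudaFree(d_out);
    return 0;
}
```

Changing the block shape (16x16 vs. 256x1 and so on) and watching the hit rate in Nsight is probably the quickest way to find the sweet spot.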
Thanks for your reply.
I also found some documents, but they describe different situations:
- texture cache, then L2 cache.
In 4.1, it shows the texture L1 cache line size is 128B.
- texture L1 cache, then texture L2 cache.
In H, you can find that the texture L1 cache line is 32B and the texture L2 cache line is 256B.
Are these two descriptions the same? That is, is the texture L2 cache the same L2 cache used for global loads?
Not sure why the first paper claims a 128 byte L1 cache line. The second paper seems more reliable.
So a texture fetch gets you 32 bytes at a time from the texture cache, which gets 32 bytes at a time from L2, which gets 256 bytes at a time from device memory. The actual geometry of that data is opaque, but with some tweaking you should be able to find a sweet spot.