GeForce GT 730 L1 cache?

Does anyone know the size of the L1 cache on a GT 730?
DeviceQuery just gives L2 Cache Size and does not mention L1.

It does report shared memory: 48 KB per block.
Does this shared memory also act as L1 cache in the usual way?

Many thanks
Bill

    Dr. W. B. Langdon,
    Department of Computer Science,
    University College London
    Gower Street, London WC1E 6BT, UK
    http://www.cs.ucl.ac.uk/staff/W.Langdon/

barracuda_0.7.105 http://seqbarracuda.sourceforge.net/
GI 2015 http://geneticimprovement2015.com/
A Field Guide to Genetic Programming
http://www.gp-field-guide.org.uk/
GPEM: Genetic Programming and Evolvable Machines
The Genetic Programming Bibliography

There are multiple versions of the GT 730: at least two desktop versions and one mobile version.

https://developer.nvidia.com/cuda-gpus

GT 730:      cc 3.5
GT 730 DDR3: cc 2.1
GT 730m:     cc 3.0
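Since the variants differ mainly in compute capability, a quick way to tell which GT 730 you have is to query the device properties yourself. A minimal sketch using the standard CUDA runtime API:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA device found: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major / prop.minor give the compute capability,
        // e.g. 3.5 for the GDDR5 GT 730, 2.1 for the DDR3 version.
        printf("Device %d: %s, cc %d.%d, L2 cache %d bytes\n",
               dev, prop.name, prop.major, prop.minor, prop.l2CacheSize);
    }
    return 0;
}
```

Note that, like deviceQuery, this prints the L2 cache size but not L1; the L1 size has to be inferred from the compute capability and the cache configuration described below.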

deviceQuery does not print the L1 cache size for any GPU that I am aware of.

On all of the above devices, the L1 cache is carved out of the same on-chip memory as shared memory, and there are API functions to modify the split:

16KB L1 / 48KB shared memory
48KB L1 / 16KB shared memory

on cc3.x devices only:

32KB L1 / 32KB shared memory
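The split can be requested either device-wide with cudaDeviceSetCacheConfig or per kernel with cudaFuncSetCacheConfig. A sketch (`myKernel` is a placeholder for your own kernel, not anything from the original post):

```cuda
#include <cuda_runtime.h>

__global__ void myKernel(float *data) {  // placeholder kernel
    data[threadIdx.x] *= 2.0f;
}

int main(void) {
    // Device-wide preference: 48KB L1 / 16KB shared memory
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

    // Per-kernel preference, overriding the device-wide setting
    // for this kernel only: 16KB L1 / 48KB shared memory
    cudaFuncSetCacheConfig(myKernel, cudaFuncCachePreferShared);

    // cudaFuncCachePreferEqual requests the 32KB / 32KB split
    // (only honoured on cc 3.x devices).
    return 0;
}
```

These are preferences, not commands: the driver is free to pick a different configuration, for example when a kernel requires more shared memory than the preferred split would leave available.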

However, the L1 cache on most Kepler devices (cc 3.x) is disabled for global memory accesses; it is reserved for local memory traffic such as register spills:

http://docs.nvidia.com/cuda/kepler-tuning-guide/index.html#l1-cache

If you read the programming guide sections on the cc2.x and cc3.x GPU architectures, you can get more detailed information:

http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capability-2-x

Dear txbob,
Thanks for this.

deviceQuery says it is a cc 2.1 device, and I don't get any errors from
cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
(i.e. cudaGetLastError() returns cudaSuccess).
So I assume all is well and it is using the hoped-for
48KB L1 / 16KB shared memory split.
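One caveat: a cudaSuccess return only means the preference was recorded, not that the hardware is actually running with that split. One way to at least confirm what was recorded is to read the setting back with cudaDeviceGetCacheConfig (a sketch):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    // Request the larger L1 split...
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

    // ...then read the current device-wide preference back.
    cudaFuncCache cfg;
    cudaError_t err = cudaDeviceGetCacheConfig(&cfg);
    if (err == cudaSuccess && cfg == cudaFuncCachePreferL1) {
        printf("Device preference: 48KB L1 / 16KB shared memory\n");
    }
    return 0;
}
```

Even then the driver may still choose another configuration for a particular kernel launch, e.g. if that kernel needs more than 16KB of shared memory.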

Thanks again
Bill