Question concerning the use of multiple textures

I would like to program a volume reconstruction algorithm based on shape silhouettes in CUDA. For this I need to reproject a voxel volume into the image planes of 6 camera views.
I plan to use 2D textures, but I am not sure what the best access pattern looks like.
Neighboring voxels project to nearby positions in the image planes, so the accesses have spatial locality and using textures seems reasonable.
I am now wondering whether it is best to read each voxel once and project it into all views at once, which would result in a single memory read per voxel, or whether I should iterate 6 times over all voxels and project them into a single view per iteration.
I think the right access pattern depends on how multiple 2D textures are cached. I have not found any description of how multiple textures are cached in the CUDA documentation and hope someone can give me some advice.
If I have 6 active textures, is each texture cached individually, or is there one cache for all textures that is invalidated each time I access another texture? Which is the best access pattern?
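To make the two candidate access patterns concrete, here is a minimal sketch of both kernels. All names are hypothetical: `prob` is the per-voxel probability volume, `proj` holds six 3x4 projection matrices, and the silhouette images are assumed to be bound as texture objects (CUDA 5.0+; on the 8800GTX generation you would bind texture references instead). The accumulation rule (multiplying in each silhouette sample) is just a placeholder for whatever probability update is used.

```cuda
#include <cuda_runtime.h>

// Variant A: single pass. Each thread reads its voxel once and samples all
// six views, so the six texture fetch streams share the per-multiprocessor
// texture cache. The per-voxel update stays in a register, so no
// synchronization between views is needed.
__global__ void projectAllViews(float* prob, const float* proj /* 6 x 12 */,
                                const cudaTextureObject_t* views, int N)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N * N * N) return;
    float x = idx % N, y = (idx / N) % N, z = idx / (N * N);

    float p = prob[idx];                          // one global read per voxel
    for (int v = 0; v < 6; ++v) {
        const float* M = proj + 12 * v;           // row-major 3x4 matrix
        float u  = M[0]*x + M[1]*y + M[2]*z  + M[3];
        float vv = M[4]*x + M[5]*y + M[6]*z  + M[7];
        float w  = M[8]*x + M[9]*y + M[10]*z + M[11];
        float s  = tex2D<float>(views[v], u / w, vv / w); // silhouette sample
        p *= s;                                   // placeholder update rule
    }
    prob[idx] = p;                                // one global write per voxel
}

// Variant B: six passes, launched once per view. Every pass re-reads and
// re-writes the whole volume, but only one texture is touched per launch,
// so the cache only ever holds lines from a single image.
__global__ void projectOneView(float* prob, const float* M /* 12 floats */,
                               cudaTextureObject_t view, int N)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N * N * N) return;
    float x = idx % N, y = (idx / N) % N, z = idx / (N * N);
    float u = M[0]*x + M[1]*y + M[2]*z  + M[3];
    float v = M[4]*x + M[5]*y + M[6]*z  + M[7];
    float w = M[8]*x + M[9]*y + M[10]*z + M[11];
    prob[idx] *= tex2D<float>(view, u / w, v / w);
}
```

The trade-off is exactly the one in the question: variant A costs one volume read/write but interleaves fetches from six textures, while variant B keeps the texture access stream uniform at the cost of six volume read/write passes.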

Thanks for your help

There is one texture cache per multiprocessor.
One multiprocessor can run several instances (blocks) of your kernel.
The texture cache is shared by all such instances of your kernel that run on a single multiprocessor.
A GPU usually has more than one multiprocessor; the 8800GTX has 16, for example.

So does this mean the texture cache is used for a single texture at a time, and accessing another texture invalidates the complete cache, or can parts of multiple textures be cached at the same time?

The fact that each multiprocessor has a separate cache does not help in my case, because I cannot have concurrent access to the same voxel; that would have to be synchronized, since I update a probability value on each voxel for each projection.

If only a region of a single image is cached at a time, it seems I have to process each voxel 6 times.
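For reference, this is a sketch of how six independent 2D textures for the views could be created with the texture-object API (introduced in CUDA 5.0; on the 8800GTX generation you would declare six separate texture references and bind an array to each instead). The function name and the single-channel float silhouette format are assumptions:

```cuda
#include <cuda_runtime.h>

// Upload one silhouette image into a cudaArray and wrap it in a texture
// object. Called once per camera view.
cudaTextureObject_t makeSilhouetteTexture(const float* hostImg, int w, int h)
{
    cudaChannelFormatDesc fmt = cudaCreateChannelDesc<float>();
    cudaArray_t arr;
    cudaMallocArray(&arr, &fmt, w, h);
    cudaMemcpy2DToArray(arr, 0, 0, hostImg, w * sizeof(float),
                        w * sizeof(float), h, cudaMemcpyHostToDevice);

    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = arr;

    cudaTextureDesc tex = {};
    tex.addressMode[0] = cudaAddressModeClamp;
    tex.addressMode[1] = cudaAddressModeClamp;
    tex.filterMode     = cudaFilterModeLinear;   // bilinear silhouette lookup
    tex.readMode       = cudaReadModeElementType;

    cudaTextureObject_t obj = 0;
    cudaCreateTextureObject(&obj, &res, &tex, nullptr);
    return obj;
}

// Usage sketch: cudaTextureObject_t views[6];
// for (int i = 0; i < 6; ++i)
//     views[i] = makeSilhouetteTexture(silhouette[i], width, height);
```

With texture objects the six views are just six handles; whether their cache lines coexist in the shared per-multiprocessor texture cache is the question above.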