Zero Copy vs. Page-Locked

Hi everybody!
I'm an Italian student, new to CUDA and graphics boards, and I'm doing some studies with the GPU, in particular on the latency of different memory types and ways of storing data.
I randomly generate a set of N data (numbers), then launch a simple kernel that does a dot product.
Using cudaEventRecord I'm measuring the latency of storing the data with pageable memory (plain malloc, copied to the device with cudaMemcpy), page-locked or pinned memory (cudaMallocHost), and zero-copy memory (mapped pinned memory accessed through cudaHostGetDevicePointer).
I was expecting zero copy to be the fastest, but for N ≳ 50000 the page-locked, non-mapped memory is faster.
Can page-locked memory be faster than zero copy? Why? Or have I made some mistake?

Example: some timings (in milliseconds) for the three memory types, for different numbers of randomly generated input data. The average and the error are taken over 300 runs for each data size:

|Pageable|PageLocked|Zero Copy|
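A minimal sketch of the kind of measurement described above, assuming the usual CUDA event-timing pattern (buffer sizes, names, and the printed labels are illustrative, not the original poster's code):

```cuda
// Sketch: time a host-to-device transfer from pageable vs. pinned host
// memory using cudaEventRecord / cudaEventElapsedTime.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t N = 50000;                    // example data count
    const size_t bytes = N * sizeof(float);

    float *h_pageable = (float *)malloc(bytes); // pageable host memory
    float *h_pinned;
    cudaMallocHost(&h_pinned, bytes);           // page-locked (pinned) host memory
    float *d_buf;
    cudaMalloc(&d_buf, bytes);                  // device memory

    for (size_t i = 0; i < N; ++i)
        h_pageable[i] = h_pinned[i] = rand() / (float)RAND_MAX;

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms;

    // Pageable path: the driver must stage through an internal pinned buffer.
    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_pageable, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pageable copy: %f ms\n", ms);

    // Pinned path: DMA directly from the page-locked buffer.
    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_pinned, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pinned copy:   %f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    cudaFreeHost(h_pinned);
    free(h_pageable);
    return 0;
}
```

Averaging over many runs, as done for the table above, smooths out driver and OS jitter in these timings.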

DMA from pinned memory has two slight advantages: (i) it doesn't need to hide the large latency of the PCIe bus inside the kernel, and (ii) its memory accesses are strictly sequential, allowing maximum bandwidth from the SDRAM. At small sizes, however, these are outweighed by the extra copy step.

Thank you very much.

But I'm still confused: I thought zero copy was a type of pinned memory. Shouldn't it have the same two properties of pinned memory that you described?

Yes, memory needs to be pinned in order to map it into the GPU address space (zero-copy). However, zero-copy memory cannot be accessed in a strictly linear pattern, because the accesses come from multiple blocks executing in parallel with unpredictable timing. Latency also matters more when reading zero-copy memory, because each memory transaction is initiated only when the kernel actually needs the data, whereas a DMA transfer is started before the kernel executes.
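To make the contrast concrete, here is a sketch of the zero-copy path, assuming a trivial kernel (the kernel and variable names are made up for illustration). Every read of the mapped pointer inside the kernel crosses the PCIe bus at the moment the thread needs it:

```cuda
// Sketch: a kernel reading zero-copy (mapped, pinned) host memory directly,
// so no explicit cudaMemcpy is needed before or after the launch.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * in[i];  // each access to `in`/`out` goes over PCIe
}

int main() {
    const int N = 50000;
    cudaSetDeviceFlags(cudaDeviceMapHost);  // enable mapping; must precede context creation

    float *h_in, *h_out;    // host-side pointers (pinned + mapped)
    float *d_in, *d_out;    // device-side aliases of the same memory
    cudaHostAlloc(&h_in,  N * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc(&h_out, N * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer(&d_in,  h_in,  0);
    cudaHostGetDevicePointer(&d_out, h_out, 0);

    for (int i = 0; i < N; ++i) h_in[i] = (float)i;

    scale<<<(N + 255) / 256, 256>>>(d_in, d_out, N);
    cudaDeviceSynchronize();  // after this, results are visible in h_out

    printf("h_out[10] = %f\n", h_out[10]);

    cudaFreeHost(h_in);
    cudaFreeHost(h_out);
    return 0;
}
```

Because the blocks of `scale` are scheduled independently, the PCIe reads they issue arrive in an unpredictable, interleaved order, which is exactly why zero-copy cannot match the strictly sequential access pattern of a DMA transfer from pinned memory.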

Perfect! Thank you very much.
Do you know of any guides that fully describe the different types of memory, and the hardware and software architecture?
The NVIDIA CUDA Programming Guide is too generic.

This paper reveals quite a bit of undocumented detail through reverse engineering: Demystifying GPU Microarchitecture through Microbenchmarking.

Apart from that, I've got my knowledge of CUDA from the Programming Guide and this forum (and my own experience with CUDA, of course). But I've also worked in chip design previously, so the CUDA concepts usually come with a mental picture of how I might have implemented them myself.