Zero-copy feature: can physical memory on one GPU be used by another?

I have asked this question in the “TESLA Drivers” thread, but no one reads that one anymore, I guess. So, here we go with a new thread:

Can the zero-copy feature be used to manipulate data on one GPU from a kernel running on some other GPU? For example: I run a kernel on the Tesla which calculates and updates a bitmap in my GeForce card's memory, which I can then easily display.

I don’t understand what you’re asking. Can you rephrase your question with regard to the zero-copy specification?

Zero-copy is about updating system RAM from a kernel running on the GPU. Is that right?

If it can update system RAM, it can also update any system-addressable memory space, including memory space that is exposed by another GPU. Right?

Is this possible?

Hope I am clear this time.

No, because zero-copy is limited to pinned memory on the host.
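To make that limitation concrete, here is a minimal sketch of how the CUDA 2.2 zero-copy API is actually used: the allocation comes from pinned *host* memory via cudaHostAlloc() with the cudaHostAllocMapped flag, and the kernel writes through a device-side alias obtained with cudaHostGetDevicePointer(). There is no call that maps another GPU's frame buffer this way. (This sketch assumes a device that reports canMapHostMemory; error checking is omitted for brevity.)

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Kernel writes directly into mapped, pinned HOST memory over the PCIe bus.
__global__ void fill(int *mapped, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        mapped[i] = i;
}

int main(void)
{
    const int n = 256;

    // Must be set before the CUDA context is created for mapping to work.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    int *h_ptr, *d_ptr;
    // Pinned, mapped host allocation: this is what "zero-copy" refers to.
    cudaHostAlloc((void **)&h_ptr, n * sizeof(int), cudaHostAllocMapped);
    // Device-side pointer aliasing the same physical host pages.
    cudaHostGetDevicePointer((void **)&d_ptr, h_ptr, 0);

    fill<<<(n + 127) / 128, 128>>>(d_ptr, n);
    cudaThreadSynchronize();  // CUDA 2.2-era synchronization call

    // The kernel's stores landed in host RAM; no cudaMemcpy was needed.
    printf("h_ptr[%d] = %d\n", n - 1, h_ptr[n - 1]);

    cudaFreeHost(h_ptr);
    return 0;
}
```

Note that both the host pointer and the device pointer refer to the same physical pages; the "zero copy" is that the kernel's reads and writes go straight across the bus instead of through an explicit cudaMemcpy.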

My questions would be:

How about extending it to frame buffers on other devices? They are not paged anyway.

All one would need to do is a cudaMalloc() on that device, translate the returned pointer to a system physical address, and then use that address in the kernel. It may have some uses, no?

In the case of pinned memory, does the compiler or the CUDA runtime prevent out-of-bounds accesses to system memory?

A bad address on the PCI bus may even lock up the entire computer, meaning a badly written kernel could crash your system.

Does CUDA 2.2 address this?