one kernel in 2 cards

I have 2 machines, and each machine has an NVIDIA CUDA-enabled GPU. The 2 machines are connected by a 100 Mbps cable, and I want to know whether CUDA provides facilities to run the same kernel on both machines (inside the GPUs) simultaneously.

No, there are no CUDA-specific features to do that, but you can certainly use MPI and CUDA from the same app.
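To illustrate the MPI + CUDA combination: the usual pattern is to start one MPI process per machine, have each process launch the same kernel on its local GPU, and then combine results over the network with MPI calls. The sketch below assumes one GPU per node and uses a made-up `scale` kernel purely for illustration; it is not a complete application.

```cuda
// Minimal sketch: each MPI rank runs the same kernel on its local GPU,
// then partial results are combined across machines with MPI.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical kernel: doubles each element of this rank's chunk.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int chunk = 1 << 20;                      // elements per rank
    float *h = (float *)malloc(chunk * sizeof(float));
    for (int i = 0; i < chunk; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, chunk * sizeof(float));
    cudaMemcpy(d, h, chunk * sizeof(float), cudaMemcpyHostToDevice);

    // Same kernel, launched simultaneously on each machine's GPU.
    scale<<<(chunk + 255) / 256, 256>>>(d, chunk, 2.0f);
    cudaMemcpy(h, d, chunk * sizeof(float), cudaMemcpyDeviceToHost);

    // Combine the per-machine partial sums on rank 0.
    float local = 0.0f, total = 0.0f;
    for (int i = 0; i < chunk; ++i) local += h[i];
    MPI_Reduce(&local, &total, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("total = %f\n", total);

    cudaFree(d);
    free(h);
    MPI_Finalize();
    return 0;
}
```

You would compile this with nvcc (linking against your MPI library) and run it with something like `mpirun -np 2 -host hostA,hostB ./app`. One practical note: over a 100 Mbps link, host-to-host transfers will be far slower than the GPUs themselves, so this approach only pays off when each machine does a lot of GPU work per byte communicated.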